<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://learnlab.org/mediawiki-1.44.2/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Connelly</id>
	<title>Theory Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://learnlab.org/mediawiki-1.44.2/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Connelly"/>
	<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Special:Contributions/Connelly"/>
	<updated>2026-05-01T04:38:45Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.44.2</generator>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=9098</id>
		<title>Extending Reflective Dialogue Support (Katz &amp; Connelly)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=9098"/>
		<updated>2009-05-04T21:07:29Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Future plans */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Extending Automated Dialogue Support for Robust Learning of Physics ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz &amp;amp; John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 10/1/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 9/30/08&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 75&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 125 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability, while other students show at least a glimmer of understanding of basic physics concepts and principles but are unable to use this knowledge to solve quantitative problems.  The present research seeks to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge via [[post-practice reflection]] dialogues that guide students in learning and practicing the concepts and principles associated with a just-solved physics problem.  It builds upon our 2006 LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]] by trying to better support [[robust learning]] via a third condition in which students work through longer dialogues designed to foster [[transfer]] by specifically tying together qualitative and quantitative knowledge in different contexts.&lt;br /&gt;
&lt;br /&gt;
Software errors and students&#039; circumvention of both the system and instructors’ scoring rubrics rendered our intended detailed analyses impossible.  However, despite the various manifestations of “cheating” and otherwise gaming the system that were discovered, we found that dialogue exposure significantly boosted gains on a post-test and on a largely quantitative final exam two months after the end of our intervention, providing evidence of some robust learning and transfer to related problem solving contexts.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
* [[Transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge (i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them), combined with quantitative practice in applying these [[knowledge components]] in different contexts, enhance students’ problem-solving ability more than additional problem solving and better foster [[transfer]] and [[robust learning]]?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that instructors deal with a double-edged sword.  Some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability.  Other students display the reverse problem: they show at least a glimmer of understanding of basic physics concepts and principles, but are unable to use this knowledge to solve quantitative problems.   Still other students master neither qualitative nor quantitative understanding of physics; very few master both.  Thus, the instructional challenge motivating this project is to find effective pedagogical strategies to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge.   Our scientific goal is to determine whether explicit and implicit learning can be effectively combined via post-practice dialogues that guide students in reflecting on the concepts and principles associated with a just-solved physics problem.  The main hypothesis tested is that, in the context of tutored problem solving, &#039;&#039;integrative reflective dialogues&#039;&#039; that explicitly tie qualitative knowledge to quantitative knowledge can improve quantitative problem-solving ability and retention of qualitative knowledge better than problem-solving practice (implicit learning) alone.  &lt;br /&gt;
&lt;br /&gt;
To test this hypothesis, we conducted an experiment in the PSLC Physics LearnLab at the US Naval Academy in sections that use the [[Andes]] physics tutoring system (VanLehn et al., 2005a, 2005b).  We compared students who were randomly assigned to one of three conditions on measures of qualitative and quantitative problem-solving performance.  The two treatment conditions engaged in automated reflective dialogues after solving quantitative physics problems, while the control condition solved the same set of problems (plus a few additional problems to balance time on task) without any reflective dialogues, using the standard version of Andes.  In one treatment condition, the reflective dialogues individually targeted the three main types of knowledge that experts employ during problem solving, according to Leonard, Dufresne, &amp;amp; Mestre (1996): knowledge about &#039;&#039;what&#039;&#039; principle(s) to apply to a given problem, &#039;&#039;how&#039;&#039; to apply these principles (e.g., what equations to use), and &#039;&#039;why&#039;&#039; to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).   In a prior LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], this intervention significantly improved students’ qualitative understanding of basic mechanics, as measured by pre-test to post-test gain scores.  However, students did not outperform standard Andes users on more [[robust learning]] measures of [[transfer]] (e.g., performance on quantitative course exams) and on a measure of retention of qualitative problem-solving ability (Katz, Connelly, &amp;amp; Wilson, 2007).  The extended dialogue condition we implemented differs from the other dialogue condition in three main ways:  (1) reflective dialogues contained more problem variations (&#039;&#039;what if&#039;&#039; scenarios), designed to support both qualitative and quantitative knowledge (most of our previous &#039;&#039;what if&#039;&#039; scenarios were qualitative only); (2) these “what if” scenarios were tied to the corresponding Andes problem-solving context &#039;&#039;&#039;and&#039;&#039;&#039; to new contexts, to help support near and far [[transfer]]; and (3) students were prompted to state the rules ([[knowledge components]]) applied to solving the problem variations, in order to promote a principle-based approach to learning, and they were given feedback that makes these rules explicit.  Our goal is to determine whether reflective dialogues that make the links between qualitative and quantitative physics knowledge explicit are more effective than both our previous dialogues and an implicit learning condition that is based on problem-solving practice alone.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
We had also intended to measure &#039;&#039;short-term retention&#039;&#039; via student performance on course exams that covered target topics (statics; translational dynamics, including circular motion; work-energy; power; and linear momentum, including impulse).  However, critical omissions in the data provided to us by course instructors rendered such measurements impossible.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
Students within each course section were block-randomly assigned to one of three dialogue conditions in which the usage of [[Knowledge Construction Dialogues]] (KCDs) differed.  The short-KCD condition approximated the treatment condition from last year&#039;s study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], in which mostly conceptual dialogues followed problem solving on selected [[Andes]] problems.&lt;br /&gt;
&lt;br /&gt;
[[Image:6kcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
The new long-KCD condition appended more quantitative practice and [[transfer]] content (via additional &#039;&#039;what if&#039;&#039; scenarios) to the short KCDs; students in this condition were assigned five fewer Andes problems than those in the short-KCD condition to attempt to equate time on task.&lt;br /&gt;
&lt;br /&gt;
[[Image:7lkcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
The control condition was identical to last year&#039;s; students saw no KCDs and were assigned five more Andes problems than the short-KCD students.&lt;br /&gt;
&lt;br /&gt;
[[Image:E1b-50.jpg]]&lt;br /&gt;
&lt;br /&gt;
The following student variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
* Number of viable target problems completed (before the post-test, or before the final exam)&lt;br /&gt;
* Number of viable (non-gibberish) dialogues completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
The main hypotheses tested were (a) that explicit, part-task training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, would be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems (replicating last year&#039;s study); and (b) that extended dialogues with additional quantitative practice in applying these [[knowledge components]] in different problem-solving contexts would better foster [[transfer]] and [[robust learning]].&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
Preliminary analyses showed marginal support for a replication of last year&#039;s finding that student performance on the post-test relative to the pre-test was significantly influenced by the number of dialogues they completed, as opposed to the number of target problems they completed.  However, due to software glitches during data collection, low student participation (completing assigned homework and/or dialogues) in some course sections, and evidence of cheating and general &amp;quot;gaming the system&amp;quot; by some students, more detailed analyses were possible only after identifying and omitting noisy data from our overall corpus.  This lengthy and painstaking process required us to examine KCD and Andes logs at a much finer level of detail than in prior studies, in an attempt to determine whether each student&#039;s list of assigned problems and dialogues represented viable data (i.e., whether the problems and KCDs were &#039;&#039;legitimately&#039;&#039; completed).&lt;br /&gt;
&lt;br /&gt;
In the end, the various problems plaguing our data rendered impossible any comparisons between our two treatment conditions.  However, we were able to salvage enough data to compare student performance relative to degrees of viable problem and dialogue completion.  The salvaged exposure measures were the number of viable KCDs completed (vs. KCDs that were &amp;quot;passed through&amp;quot; with gibberish responses) and the number of viable problems completed (vs. those on which students likely &amp;quot;cheated&amp;quot;, as determined by answer-only &amp;quot;solutions&amp;quot; or minimal Andes inputs to attain a score of 50, the minimal criterion for full credit used by one course instructor).&lt;br /&gt;
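&lt;br /&gt;
As a rough, hypothetical illustration of this screening step (not our actual procedure), the sketch below flags viable KCDs and problems from log tables with assumed field names and thresholds; the criteria loosely mirror those described above (gibberish KCD responses, and answer-only or minimal-input problem &amp;quot;solutions&amp;quot; submitted just to reach the minimum full-credit score).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Hypothetical screening sketch; field names and thresholds are assumptions.&lt;br /&gt;
import pandas as pd&lt;br /&gt;
&lt;br /&gt;
# Assumed per-record logs: one row per student-KCD and per student-problem.&lt;br /&gt;
kcd_log = pd.read_csv(&amp;quot;kcd_log.csv&amp;quot;)      # student, kcd_id, gibberish_turns, total_turns&lt;br /&gt;
andes_log = pd.read_csv(&amp;quot;andes_log.csv&amp;quot;)  # student, problem_id, num_entries, score&lt;br /&gt;
&lt;br /&gt;
# A KCD counts as viable if most of its turns are substantive rather than gibberish.&lt;br /&gt;
kcd_log[&amp;quot;viable&amp;quot;] = kcd_log[&amp;quot;gibberish_turns&amp;quot;] / kcd_log[&amp;quot;total_turns&amp;quot;].clip(lower=1) &amp;lt; 0.5&lt;br /&gt;
&lt;br /&gt;
# A problem counts as viable if the student made more than a token number of entries,&lt;br /&gt;
# rather than entering answers only to hit the minimum full-credit score.&lt;br /&gt;
andes_log[&amp;quot;viable&amp;quot;] = andes_log[&amp;quot;num_entries&amp;quot;] &amp;gt;= 5&lt;br /&gt;
&lt;br /&gt;
# Per-student exposure counts used as predictors in the regressions below.&lt;br /&gt;
viable_kcds = kcd_log[kcd_log[&amp;quot;viable&amp;quot;]].groupby(&amp;quot;student&amp;quot;)[&amp;quot;kcd_id&amp;quot;].nunique()&lt;br /&gt;
viable_problems = andes_log[andes_log[&amp;quot;viable&amp;quot;]].groupby(&amp;quot;student&amp;quot;)[&amp;quot;problem_id&amp;quot;].nunique()&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;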
&lt;br /&gt;
==== Pre- and Post-Tests ====&lt;br /&gt;
&lt;br /&gt;
Regressing post-test score on pre-test score, CQPR, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .52, &#039;&#039;F&#039;&#039;(4, 54) = 14.82, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors, but only pre-test score and CQPR were statistically significant (&#039;&#039;p&#039;&#039;s &amp;lt; .0005 &amp;amp; .05, respectively); KCD completion was marginal (&#039;&#039;p&#039;&#039; = .08) and problem completion was ns (t &amp;lt; 1).  After omitting CQPR from the regression model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .50, &#039;&#039;F&#039;&#039;(3, 57) = 18.74, &#039;&#039;p&#039;&#039; &amp;lt; .00001), the effect of KCD completion reached statistical significance (&#039;&#039;p&#039;&#039; = .012) while that of problem completion remained &#039;&#039;ns&#039;&#039; (&#039;&#039;t&#039;&#039; &amp;lt; 1).&lt;br /&gt;
&lt;br /&gt;
Regressing post-test qualitative subscore on pre-test qualitative subscore, CQPR, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .37, &#039;&#039;F&#039;&#039;(4, 54) = 7.96, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors, but only pre-test subscore was statistically significant (&#039;&#039;p&#039;&#039; &amp;lt; .005); KCD completion reached marginal status (&#039;&#039;p&#039;&#039; = .07) only after dropping CQPR from the regression model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .35, &#039;&#039;F&#039;&#039;(3, 57) = 10.33, &#039;&#039;p&#039;&#039; &amp;lt; .00001).  The effect of problem completion was &#039;&#039;ns&#039;&#039; (&#039;&#039;t&#039;&#039; &amp;lt; 1) in both models.&lt;br /&gt;
&lt;br /&gt;
Regressing post-test quantitative subscore on pre-test quantitative subscore, CQPR, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .46, &#039;&#039;F&#039;&#039;(4, 54) = 11.53, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors, but only pre-test subscore and CQPR were statistically significant (&#039;&#039;p&#039;&#039;s &amp;lt; .0005 &amp;amp; .01, respectively); KCD completion reached statistical significance (&#039;&#039;p&#039;&#039; = .015) only after dropping CQPR from the regression model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .41, &#039;&#039;F&#039;&#039;(3, 57) = 13.31, &#039;&#039;p&#039;&#039; &amp;lt; .00001).  The effect of problem completion was &#039;&#039;ns&#039;&#039; in both models.&lt;br /&gt;
&lt;br /&gt;
A marginal regression of normalized (Estes) gain scores on CQPR, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .12, &#039;&#039;F&#039;&#039;(3, 55) = 2.59, &#039;&#039;p&#039;&#039; = .062) showed positive contributions of all factors, but the only factor that was even marginal was KCD completion (&#039;&#039;p&#039;&#039; = .09).  However, omitting CQPR from the model resulted in a significant regression (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .13, &#039;&#039;F&#039;&#039;(2, 58) = 4.24, &#039;&#039;p&#039;&#039; &amp;lt; .05) with a significant effect of KCD completion (&#039;&#039;p&#039;&#039; = .016).  The effect of problem completion was &#039;&#039;ns&#039;&#039; (&#039;&#039;t&#039;&#039; &amp;lt; 1) in both models.&lt;br /&gt;
&lt;br /&gt;
==== Final Exam &amp;amp; Course Grades ====&lt;br /&gt;
&lt;br /&gt;
A regression of exam score on CQPR, KCD completion counts, and problem completion counts was significant (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .47, &#039;&#039;F&#039;&#039;(3, 68) = 20.17, &#039;&#039;p&#039;&#039; &amp;lt; .00001) and showed positive contributions of all three factors, but only CQPR had a statistically significant effect (&#039;&#039;p&#039;&#039; &amp;lt; .00001).  However, when we omitted CQPR from the model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .16, &#039;&#039;F&#039;&#039;(2, 72) = 7.08, &#039;&#039;p&#039;&#039; &amp;lt; .005), both KCD and problem completion counts reached statistical significance (&#039;&#039;p&#039;&#039;s &amp;lt; .05).  A regression of final grade on all three factors was significant (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .56, &#039;&#039;F&#039;&#039;(3, 53) = 22.10, &#039;&#039;p&#039;&#039; &amp;lt; .00001), but again only CQPR had a statistically significant effect (&#039;&#039;p&#039;&#039; &amp;lt; .00001).  When we omitted CQPR from this model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .13, &#039;&#039;F&#039;&#039;(2, 55) = 3.99, &#039;&#039;p&#039;&#039; &amp;lt; .05), neither factor of interest reached significance, but KCD completion had a stronger marginal effect (&#039;&#039;p&#039;&#039; = .073) than did problem completion (&#039;&#039;p&#039;&#039; = .106).&lt;br /&gt;
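&lt;br /&gt;
To make the recurring with/without-CQPR comparison concrete, here is a minimal, hypothetical sketch (continuing the assumed data frame and column names from the earlier sketch, plus an assumed final_exam column); it is illustrative only, not our analysis code.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Hypothetical sketch; column names are assumptions carried over from above.&lt;br /&gt;
import statsmodels.formula.api as smf&lt;br /&gt;
&lt;br /&gt;
with_cqpr = smf.ols(&amp;quot;final_exam ~ cqpr + kcds_viable + problems_viable&amp;quot;, data=df).fit()&lt;br /&gt;
without_cqpr = smf.ols(&amp;quot;final_exam ~ kcds_viable + problems_viable&amp;quot;, data=df).fit()&lt;br /&gt;
&lt;br /&gt;
# Inspect how the p-values for the exposure terms shift once CQPR is dropped.&lt;br /&gt;
print(with_cqpr.pvalues[[&amp;quot;kcds_viable&amp;quot;, &amp;quot;problems_viable&amp;quot;]])&lt;br /&gt;
print(without_cqpr.pvalues[[&amp;quot;kcds_viable&amp;quot;, &amp;quot;problems_viable&amp;quot;]])&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;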
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
Despite being unable to perform many of our intended analyses, including examinations of group differences between our old Short KCDs and new Long KCDs, we were able to replicate our prior finding (Katz et al. 2007) that it was KCD completion, as opposed to the completion of target homework problems, that significantly (for most scores and subscores) improved post-test performance.  In other words, whether the KCDs were Short or Long, doing more of them better predicted post-test scores than did the solving of homework problems.&lt;br /&gt;
&lt;br /&gt;
Moreover, we found that both factors significantly improved scores on the final exam.  That is, the more KCDs and target homework problems students did, the better they performed on the final exam.  Although neither factor had a statistically significant effect on final course grades, the same trend existed, with KCD completion being a stronger predictor of course grades than were homework problems.&lt;br /&gt;
&lt;br /&gt;
This marked the first time in our line of work with KCDs (Connelly &amp;amp; Katz, 2006; Katz et al., 2005, 2007) that learning benefits of our dialogues transferred to longer-term performance measures.  Future work could investigate the relative benefits of our Short, largely qualitative KCDs versus our Long KCDs with both qualitative and quantitative knowledge, as well as explicit ties between them.&lt;br /&gt;
&lt;br /&gt;
This study also has implications for other studies of learning gains from instructional interventions, which tend to show small (if any) effects.  Investigators&#039; attempts to “clean” their data, after being confronted with “cheating” and “gaming” behaviors, may be worthwhile in that they might increase the signal-to-noise ratio enough for intended (or perhaps even unintended) learning effects to emerge.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to Advisory Board, January 2008&lt;br /&gt;
* Virtual brief paper (results with salvaged data) accepted at ED-MEDIA 2009:&lt;br /&gt;
** Connelly, J., &amp;amp; Katz, S. (in press). Toward more robust learning of physics via reflective dialogue extensions. To appear in &#039;&#039;Proceedings of ED-MEDIA 2009&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Connelly, J., &amp;amp; Katz, S. (2006). Intelligent dialogue support for physics problem solving: Some preliminary mixed results. &#039;&#039;Technology, Instruction, Cognition and Learning, 4&#039;&#039;, 1-29. Philadelphia, PA: Old City Publishing.&lt;br /&gt;
*Katz, S., Connelly, J., &amp;amp; Wilson, C. (2005). When should dialogues in a scaffolded learning environment take place? In P. Kommers &amp;amp; G. Richards (Eds.), &#039;&#039;Proceedings of ED-MEDIA 2005&#039;&#039; (pp. 2850-2855). Norfolk, VA: AACE.&lt;br /&gt;
*Katz, S., Connelly, J., &amp;amp; Wilson, C. (2007).  Out of the lab and into the classroom: An evaluation of reflective dialogue in Andes.  In R. Luckin, K. R. Koedinger, &amp;amp; J. Greer (Eds.), &#039;&#039;Artificial Intelligence in Education: Building Technology Rich Learning Contexts that Work&#039;&#039; (pp. 425-432). Amsterdam: IOS Press.&lt;br /&gt;
*Leonard, W. J., Dufresne, R. J., &amp;amp; Mestre, J. P. (1996). Using qualitative problem-solving strategies to highlight the role of conceptual knowledge in solving problems. &#039;&#039;American Journal of Physics, 64&#039;&#039; (12), 1495-1503.&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Lessons learned.  &#039;&#039;International Journal of Artificial Intelligence in Education&#039;&#039;, &#039;&#039;15&#039;&#039;(3).&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Five years of evaluation.  In G. McCalla, C.K. Looi, B. Bredeweg, &amp;amp; J. Breuker (Eds.), &#039;&#039;Artificial Intelligence in Education&#039;&#039; (pp. 678-685).  Amsterdam, Netherlands: IOS Press.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Post-practice reflection (Katz)|Post-practice reflection (Katz &amp;amp; Connelly, 2005)]]&lt;br /&gt;
* [[Reflective Dialogues (Katz)|Reflective Dialogues (Katz, Connelly &amp;amp; Treacy, 2006)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; VanLehn)]]&lt;br /&gt;
&lt;br /&gt;
====  Future plans ====&lt;br /&gt;
Our future plans to wrap up the project:&lt;br /&gt;
* resubmit rejected conference paper detailing the process by which we salvaged usable data&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=9097</id>
		<title>Extending Reflective Dialogue Support (Katz &amp; Connelly)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=9097"/>
		<updated>2009-05-04T21:07:08Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Annotated bibliography */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Extending Automated Dialogue Support for Robust Learning of Physics ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz &amp;amp; John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 10/1/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 9/30/08&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 75&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 125 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability, while other students show at least a glimmer of understanding of basic physics concepts and principles but are unable to use this knowledge to solve quantitative problems.  The present research seeks to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge via [[post-practice reflection]] dialogues that guide students in learning and practicing the concepts and principles associated with a just-solved physics problem.  It builds upon our 2006 LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]] by trying to better support [[robust learning]] via a third condition in which students work through longer dialogues designed to foster [[transfer]] by specifically tying together qualitative and quantitative knowledge in different contexts.&lt;br /&gt;
&lt;br /&gt;
Software errors and students&#039; circumvention of both the system and instructors’ scoring rubrics rendered our intended detailed analyses impossible.  However, despite the various manifestations of “cheating” and otherwise gaming the system that were discovered, we found that dialogue exposure significantly boosted gains on a post-test and on a largely quantitative final exam two months after the end of our intervention, providing evidence of some robust learning and transfer to related problem solving contexts.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
* [[Transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge (i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them), combined with quantitative practice in applying these [[knowledge components]] in different contexts, enhance students’ problem-solving ability more than additional problem solving and better foster [[transfer]] and [[robust learning]]?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that instructors deal with a double-edged sword.  Some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability.  Other students display the reverse problem: they show at least a glimmer of understanding of basic physics concepts and principles, but are unable to use this knowledge to solve quantitative problems.   Still other students master neither qualitative nor quantitative understanding of physics; very few master both.  Thus, the instructional challenge motivating this project is to find effective pedagogical strategies to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge.   Our scientific goal is to determine whether explicit and implicit learning can be effectively combined via post-practice dialogues that guide students in reflecting on the concepts and principles associated with a just-solved physics problem.  The main hypothesis tested is that, in the context of tutored problem solving, &#039;&#039;integrative reflective dialogues&#039;&#039; that explicitly tie qualitative knowledge to quantitative knowledge can improve quantitative problem-solving ability and retention of qualitative knowledge better than problem-solving practice (implicit learning) alone.  &lt;br /&gt;
&lt;br /&gt;
To test this hypothesis, we conducted an experiment in the PSLC Physics LearnLab at the US Naval Academy in sections that use the [[Andes]] physics tutoring system (VanLehn et al., 2005a, 2005b).  We compared students who were randomly assigned to one of three conditions on measures of qualitative and quantitative problem-solving performance.  The two treatment conditions engaged in automated reflective dialogues after solving quantitative physics problems, while the control condition solved the same set of problems (plus a few additional problems to balance time on task) without any reflective dialogues, using the standard version of Andes.  In one treatment condition, the reflective dialogues individually targeted the three main types of knowledge that experts employ during problem solving, according to Leonard, Dufresne, &amp;amp; Mestre (1996): knowledge about &#039;&#039;what&#039;&#039; principle(s) to apply to a given problem, &#039;&#039;how&#039;&#039; to apply these principles (e.g., what equations to use), and &#039;&#039;why&#039;&#039; to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).   In a prior LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], this intervention significantly improved students’ qualitative understanding of basic mechanics, as measured by pre-test to post-test gain scores.  However, students did not outperform standard Andes users on more [[robust learning]] measures of [[transfer]] (e.g., performance on quantitative course exams) and on a measure of retention of qualitative problem-solving ability (Katz, Connelly, &amp;amp; Wilson, 2007).  The extended dialogue condition we implemented differs from the other dialogue condition in three main ways:  (1) reflective dialogues contained more problem variations (&#039;&#039;what if&#039;&#039; scenarios), designed to support both qualitative and quantitative knowledge (most of our previous &#039;&#039;what if&#039;&#039; scenarios were qualitative only); (2) these “what if” scenarios were tied to the corresponding Andes problem-solving context &#039;&#039;&#039;and&#039;&#039;&#039; to new contexts, to help support near and far [[transfer]]; and (3) students were prompted to state the rules ([[knowledge components]]) applied to solving the problem variations, in order to promote a principle-based approach to learning, and they were given feedback that makes these rules explicit.  Our goal is to determine whether reflective dialogues that make the links between qualitative and quantitative physics knowledge explicit are more effective than both our previous dialogues and an implicit learning condition that is based on problem-solving practice alone.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
We had also intended to measure &#039;&#039;short-term retention&#039;&#039; via student performance on course exams that covered target topics (statics; translational dynamics, including circular motion; work-energy; power; and linear momentum, including impulse).  However, critical omissions in the data provided to us by course instructors rendered such measurements impossible.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
Students within each course section were block-randomly assigned to one of three dialogue conditions in which the usage of [[Knowledge Construction Dialogues]] (KCDs) differed.  The short-KCD condition approximated the treatment condition from last year&#039;s study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], in which mostly conceptual dialogues followed problem solving on selected [[Andes]] problems.&lt;br /&gt;
&lt;br /&gt;
[[Image:6kcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
The new long-KCD condition appended more quantitative practice and [[transfer]] content (via additional &#039;&#039;what if&#039;&#039; scenarios) to the short KCDs; students in this condition were assigned five fewer Andes problems than those in the short-KCD condition to attempt to equate time on task.&lt;br /&gt;
&lt;br /&gt;
[[Image:7lkcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
The control condition was identical to last year&#039;s; students saw no KCDs and were assigned five more Andes problems than the short-KCD students.&lt;br /&gt;
&lt;br /&gt;
[[Image:E1b-50.jpg]]&lt;br /&gt;
&lt;br /&gt;
The following student variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
* Number of viable target problems completed (before the post-test, or before the final exam)&lt;br /&gt;
* Number of viable (non-gibberish) dialogues completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
The main hypotheses tested were (a) that explicit, part-task training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, would be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems (replicating last year&#039;s study); and (b) that extended dialogues with additional quantitative practice in applying these [[knowledge components]] in different problem-solving contexts would better foster [[transfer]] and [[robust learning]].&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
Preliminary analyses showed marginal support for a replication of last year&#039;s finding that student performance on the post-test relative to the pre-test was significantly influenced by the number of dialogues they completed, as opposed to the number of target problems they completed.  However, due to software glitches during data collection, low student participation (completing assigned homework and/or dialogues) in some course sections, and evidence of cheating and general &amp;quot;gaming the system&amp;quot; by some students, more detailed analyses were possible only after identifying and omitting noisy data from our overall corpus.  This lengthy and painstaking process required us to examine KCD and Andes logs at a much finer level of detail than in prior studies, in an attempt to determine whether each student&#039;s list of assigned problems and dialogues represented viable data (i.e., whether the problems and KCDs were &#039;&#039;legitimately&#039;&#039; completed).&lt;br /&gt;
&lt;br /&gt;
In the end, the various problems plaguing our data rendered impossible any comparisons between our two treatment conditions.  However, we were able to salvage enough data to compare student performance relative to degrees of viable problem and dialogue completion.  The salvaged exposure measures were the number of viable KCDs completed (vs. KCDs that were &amp;quot;passed through&amp;quot; with gibberish responses) and the number of viable problems completed (vs. those on which students likely &amp;quot;cheated&amp;quot;, as determined by answer-only &amp;quot;solutions&amp;quot; or minimal Andes inputs to attain a score of 50, the minimal criterion for full credit used by one course instructor).&lt;br /&gt;
&lt;br /&gt;
==== Pre- and Post-Tests ====&lt;br /&gt;
&lt;br /&gt;
Regressing post-test score on pre-test score, CQPR, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .52, &#039;&#039;F&#039;&#039;(4, 54) = 14.82, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors, but only pre-test score and CQPR were statistically significant (&#039;&#039;p&#039;&#039;s &amp;lt; .0005 &amp;amp; .05, respectively); KCD completion was marginal (&#039;&#039;p&#039;&#039; = .08) and problem completion was ns (t &amp;lt; 1).  After omitting CQPR from the regression model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .50, &#039;&#039;F&#039;&#039;(3, 57) = 18.74, &#039;&#039;p&#039;&#039; &amp;lt; .00001), the effect of KCD completion reached statistical significance (&#039;&#039;p&#039;&#039; = .012) while that of problem completion remained &#039;&#039;ns&#039;&#039; (&#039;&#039;t&#039;&#039; &amp;lt; 1).&lt;br /&gt;
&lt;br /&gt;
Regressing post-test qualitative subscore on pre-test qualitative subscore, CQPR, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .37, &#039;&#039;F&#039;&#039;(4, 54) = 7.96, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors, but only pre-test subscore was statistically significant (&#039;&#039;p&#039;&#039; &amp;lt; .005); KCD completion reached marginal status (&#039;&#039;p&#039;&#039; = .07) only after dropping CQPR from the regression model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .35, &#039;&#039;F&#039;&#039;(3, 57) = 10.33, &#039;&#039;p&#039;&#039; &amp;lt; .00001).  The effect of problem completion was &#039;&#039;ns&#039;&#039; (&#039;&#039;t&#039;&#039; &amp;lt; 1) in both models.&lt;br /&gt;
&lt;br /&gt;
Regressing post-test quantitative subscore on pre-test quantitative subscore, CQPR, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .46, &#039;&#039;F&#039;&#039;(4, 54) = 11.53, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors, but only pre-test subscore and CQPR were statistically significant (&#039;&#039;p&#039;&#039;s &amp;lt; .0005 &amp;amp; .01, respectively); KCD completion reached statistical significance (&#039;&#039;p&#039;&#039; = .015) only after dropping CQPR from the regression model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .41, &#039;&#039;F&#039;&#039;(3, 57) = 13.31, &#039;&#039;p&#039;&#039; &amp;lt; .00001).  The effect of problem completion was &#039;&#039;ns&#039;&#039; in both models.&lt;br /&gt;
&lt;br /&gt;
A marginal regression of normalized (Estes) gain scores on CQPR, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .12, &#039;&#039;F&#039;&#039;(3, 55) = 2.59, &#039;&#039;p&#039;&#039; = .062) showed positive contributions of all factors, but the only factor that was even marginal was KCD completion (&#039;&#039;p&#039;&#039; = .09).  However, omitting CQPR from the model resulted in a significant regression (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .13, &#039;&#039;F&#039;&#039;(2, 58) = 4.24, &#039;&#039;p&#039;&#039; &amp;lt; .05) with a significant effect of KCD completion (&#039;&#039;p&#039;&#039; = .016).  The effect of problem completion was &#039;&#039;ns&#039;&#039; (&#039;&#039;t&#039;&#039; &amp;lt; 1) in both models.&lt;br /&gt;
&lt;br /&gt;
==== Final Exam &amp;amp; Course Grades ====&lt;br /&gt;
&lt;br /&gt;
A regression of exam score on CQPR, KCD completion counts, and problem completion counts was significant (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .47, &#039;&#039;F&#039;&#039;(3, 68) = 20.17, &#039;&#039;p&#039;&#039; &amp;lt; .00001) and showed positive contributions of all three factors, but only CQPR had a statistically significant effect (&#039;&#039;p&#039;&#039; &amp;lt; .00001).  However, when we omitted CQPR from the model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .16, &#039;&#039;F&#039;&#039;(2, 72) = 7.08, &#039;&#039;p&#039;&#039; &amp;lt; .005), both KCD and problem completion counts reached statistical significance (&#039;&#039;p&#039;&#039;s &amp;lt; .05).  A regression of final grade on all three factors was significant (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .56, &#039;&#039;F&#039;&#039;(3, 53) = 22.10, &#039;&#039;p&#039;&#039; &amp;lt; .00001), but again only CQPR had a statistically significant effect (&#039;&#039;p&#039;&#039; &amp;lt; .00001).  When we omitted CQPR from this model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .13, &#039;&#039;F&#039;&#039;(2, 55) = 3.99, &#039;&#039;p&#039;&#039; &amp;lt; .05), neither factor of interest reached significance, but KCD completion had a stronger marginal effect (&#039;&#039;p&#039;&#039; = .073) than did problem completion (&#039;&#039;p&#039;&#039; = .106).&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
Despite being unable to perform many of our intended analyses, including examinations of group differences between our old Short KCDs and new Long KCDs, we were able to replicate our prior finding (Katz et al. 2007) that it was KCD completion, as opposed to the completion of target homework problems, that significantly (for most scores and subscores) improved post-test performance.  In other words, whether the KCDs were Short or Long, doing more of them better predicted post-test scores than did the solving of homework problems.&lt;br /&gt;
&lt;br /&gt;
Moreover, we found that both factors significantly improved scores on the final exam.  That is, the more KCDs and target homework problems students did, the better they performed on the final exam.  Although neither factor had a statistically significant effect on final course grades, the same trend existed, with KCD completion being a stronger predictor of course grades than were homework problems.&lt;br /&gt;
&lt;br /&gt;
This marked the first time in our line of work with KCDs (Connelly &amp;amp; Katz, 2006; Katz et al., 2005, 2007) that learning benefits of our dialogues transferred to longer-term performance measures.  Future work could investigate the relative benefits of our Short, largely qualitative KCDs versus our Long KCDs with both qualitative and quantitative knowledge, as well as explicit ties between them.&lt;br /&gt;
&lt;br /&gt;
This study also has implications for other studies of learning gains from instructional interventions, which tend to show small (if any) effects.  Investigators&#039; attempts to “clean” their data, after being confronted with “cheating” and “gaming” behaviors, may be worthwhile in that they might increase the signal-to-noise ratio enough for intended (or perhaps even unintended) learning effects to emerge.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to Advisory Board, January 2008&lt;br /&gt;
* Virtual brief paper (results with salvaged data) accepted at ED-MEDIA 2009:&lt;br /&gt;
** Connelly, J., &amp;amp; Katz, S. (in press). Toward more robust learning of physics via reflective dialogue extensions. To appear in &#039;&#039;Proceedings of ED-MEDIA 2009&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Connelly, J., &amp;amp; Katz, S. (2006). Intelligent dialogue support for physics problem solving: Some preliminary mixed results. &#039;&#039;Technology, Instruction, Cognition and Learning, 4&#039;&#039;, 1-29. Philadelphia, PA: Old City Publishing.&lt;br /&gt;
*Katz, S., Connelly, J., &amp;amp; Wilson, C. (2005). When should dialogues in a scaffolded learning environment take place? In P. Kommers &amp;amp; G. Richards (Eds.), &#039;&#039;Proceedings of ED-MEDIA 2005&#039;&#039; (pp. 2850-2855). Norfolk, VA: AACE.&lt;br /&gt;
*Katz, S., Connelly, J., &amp;amp; Wilson, C. (2007).  Out of the lab and into the classroom: An evaluation of reflective dialogue in Andes.  In R. Luckin, K. R. Koedinger, &amp;amp; J. Greer (Eds.), &#039;&#039;Artificial Intelligence in Education: Building Technology Rich Learning Contexts that Work&#039;&#039; (pp. 425-432). Amsterdam: IOS Press.&lt;br /&gt;
*Leonard, W. J., Dufresne, R. J., &amp;amp; Mestre, J. P. (1996). Using qualitative problem-solving strategies to highlight the role of conceptual knowledge in solving problems. &#039;&#039;American Journal of Physics, 64&#039;&#039; (12), 1495-1503.&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Lessons learned.  &#039;&#039;International Journal of Artificial Intelligence in Education&#039;&#039;, &#039;&#039;15&#039;&#039;(3).&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Five years of evaluation.  In G. McCalla, C.K. Looi, B. Bredeweg, &amp;amp; J. Breuker (Eds.), &#039;&#039;Artificial Intelligence in Education&#039;&#039; (pp. 678-685).  Amsterdam, Netherlands: IOS Press.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Post-practice reflection (Katz)|Post-practice reflection (Katz &amp;amp; Connelly, 2005)]]&lt;br /&gt;
* [[Reflective Dialogues (Katz)|Reflective Dialogues (Katz, Connelly &amp;amp; Treacy, 2006)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; VanLehn)]]&lt;br /&gt;
&lt;br /&gt;
====  Future plans ====&lt;br /&gt;
Our future plans to wrap up the project:&lt;br /&gt;
* write conference paper reporting updated findings on salvaged data&lt;br /&gt;
* write conference paper detailing the process by which we salvaged usable data&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=9096</id>
		<title>Extending Reflective Dialogue Support (Katz &amp; Connelly)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=9096"/>
		<updated>2009-05-04T21:06:34Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Annotated bibliography */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Extending Automated Dialogue Support for Robust Learning of Physics ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz &amp;amp; John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 10/1/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 9/30/08&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 75&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 125 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability, while other students show at least a glimmer of understanding of basic physics concepts and principles but are unable to use this knowledge to solve quantitative problems.  The present research seeks to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge via [[post-practice reflection]] dialogues that guide students in learning and practicing the concepts and principles associated with a just-solved physics problem.  It builds upon our 2006 LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]] by trying to better support [[robust learning]] via a third condition in which students work through longer dialogues designed to foster [[transfer]] by specifically tying together qualitative and quantitative knowledge in different contexts.&lt;br /&gt;
&lt;br /&gt;
Software errors and students&#039; circumvention of both the system and instructors’ scoring rubrics rendered our intended detailed analyses impossible.  However, despite the various manifestations of “cheating” and otherwise gaming the system that were discovered, we found that dialogue exposure significantly boosted gains on a post-test and on a largely quantitative final exam two months after the end of our intervention, providing evidence of some robust learning and transfer to related problem solving contexts.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
* [[Transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge (i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them), combined with quantitative practice in applying these [[knowledge components]] in different contexts, enhance students’ problem-solving ability more than additional problem solving and better foster [[transfer]] and [[robust learning]]?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that instructors deal with a double-edged sword.  Some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability.  Other students display the reverse problem: they show at least a glimmer of understanding of basic physics concepts and principles, but are unable to use this knowledge to solve quantitative problems.   Still other students master neither qualitative nor quantitative understanding of physics; very few master both.  Thus, the instructional challenge motivating this project is to find effective pedagogical strategies to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge.   Our scientific goal is to determine whether explicit and implicit learning can be effectively combined via post-practice dialogues that guide students in reflecting on the concepts and principles associated with a just-solved physics problem.  The main hypothesis tested is that, in the context of tutored problem solving, &#039;&#039;integrative reflective dialogues&#039;&#039; that explicitly tie qualitative knowledge to quantitative knowledge can improve quantitative problem-solving ability and retention of qualitative knowledge better than problem-solving practice (implicit learning) alone.  &lt;br /&gt;
&lt;br /&gt;
To test this hypothesis, we conducted an experiment in the PSLC Physics LearnLab at the US Naval Academy in sections that use the [[Andes]] physics tutoring system (VanLehn et al., 2005a, 2005b).  We compared students who were randomly assigned to one of three conditions on measures of qualitative and quantitative problem-solving performance.  The two treatment conditions engaged in automated reflective dialogues after solving quantitative physics problems, while the control condition solved the same set of problems (plus a few additional problems to balance time on task) without any reflective dialogues, using the standard version of Andes.  In one treatment condition, the reflective dialogues individually targeted the three main types of knowledge that experts employ during problem solving, according to Leonard, Dufresne, &amp;amp; Mestre (1996): knowledge about &#039;&#039;what&#039;&#039; principle(s) to apply to a given problem, &#039;&#039;how&#039;&#039; to apply these principles (e.g., what equations to use), and &#039;&#039;why&#039;&#039; to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).   In a prior LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], this intervention significantly improved students’ qualitative understanding of basic mechanics, as measured by pre-test to post-test gain scores.  However, students did not outperform standard Andes users on more [[robust learning]] measures of [[transfer]] (e.g., performance on quantitative course exams) and on a measure of retention of qualitative problem-solving ability (Katz, Connelly, &amp;amp; Wilson, 2007).  The extended dialogue condition we implemented differs from the other dialogue condition in three main ways:  (1) reflective dialogues contained more problem variations (&#039;&#039;what if&#039;&#039; scenarios), designed to support both qualitative and quantitative knowledge (most of our previous &#039;&#039;what if&#039;&#039; scenarios were qualitative only); (2) these “what if” scenarios were tied to the corresponding Andes problem-solving context &#039;&#039;&#039;and&#039;&#039;&#039; to new contexts, to help support near and far [[transfer]]; and (3) students were prompted to state the rules ([[knowledge components]]) applied to solving the problem variations, in order to promote a principle-based approach to learning, and they were given feedback that makes these rules explicit.  Our goal is to determine whether reflective dialogues that make the links between qualitative and quantitative physics knowledge explicit are more effective than both our previous dialogues and an implicit learning condition that is based on problem-solving practice alone.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on the final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
We had also intended to measure &#039;&#039;short-term retention&#039;&#039; via student performance on course exams that covered target topics (statics; translational dynamics, including circular motion; work-energy; power; and linear momentum, including impulse).  However, critical omissions in the data provided to us by course instructors rendered such measurements impossible.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
Students within each course section were block-randomly assigned to one of three dialogue conditions in which the usage of [[Knowledge Construction Dialogues]] (KCDs) differed.  The short-KCD condition approximated the treatment condition from last year&#039;s study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], in which mostly conceptual dialogues followed problem solving on selected [[Andes]] problems.&lt;br /&gt;
&lt;br /&gt;
[[Image:6kcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
The new long-KCD condition appended more quantitative practice and [[transfer]] content (via additional &#039;&#039;what if&#039;&#039; scenarios) to the short KCDs; students in this condition were assigned five fewer Andes problems than those in the short-KCD condition to attempt to equate time on task.&lt;br /&gt;
&lt;br /&gt;
[[Image:7lkcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
The control condition was identical to last year&#039;s; students saw no KCDs and were assigned five more Andes problems than the short-KCD students.&lt;br /&gt;
&lt;br /&gt;
[[Image:E1b-50.jpg]]&lt;br /&gt;
&lt;br /&gt;
The following student variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
* Number of viable target problems completed (before the post-test, or before the final exam)&lt;br /&gt;
* Number of viable (non-gibberish) dialogues completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
The main hypotheses tested were (a) that explicit, part-task training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, would be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems (replicating last year&#039;s study); and (b) that extended dialogues with additional quantitative practice in applying these [[knowledge components]] in different problem-solving contexts would better foster [[transfer]] and [[robust learning]].&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
Preliminary analyses showed marginal support for a replication of last year&#039;s finding that student performance on the post-test relative to the pre-test was significantly influenced by the number of dialogues they completed, as opposed to the number of target problems they completed.  However, due to software glitches during data collection, low student participation (i.e., completion of assigned homework and/or dialogues) in some course sections, and evidence of cheating and general &amp;quot;gaming the system&amp;quot; by some students, more detailed analyses were made possible only by the identification and omission of noisy data from our overall corpus.  This lengthy and painstaking process required us to examine KCD and Andes logs at a much finer level of detail than in prior studies, in an attempt to determine whether each student&#039;s list of assigned problems and dialogues represented viable data (i.e., whether the problems and KCDs were &#039;&#039;legitimately&#039;&#039; completed).&lt;br /&gt;
&lt;br /&gt;
In the end, the various problems plaguing our data made any comparison between our two treatment conditions impossible.  However, we were able to salvage enough data to compare student performance relative to degrees of viable problem and dialogue completion.  The salvaged exposure measures were the number of viable KCDs completed (vs. KCDs that were &amp;quot;passed through&amp;quot; with gibberish responses) and the number of viable problems completed (vs. those on which students likely &amp;quot;cheated&amp;quot;, as indicated by answer-only &amp;quot;solutions&amp;quot; or by minimal Andes inputs that attained a score of 50, the minimum score required for full credit by one course instructor).&lt;br /&gt;
&lt;br /&gt;
==== Pre- and Post-Tests ====&lt;br /&gt;
&lt;br /&gt;
Regressing post-test score on pre-test score, CQPR, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .52, &#039;&#039;F&#039;&#039;(4, 54) = 14.82, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors, but only pre-test score and CQPR were statistically significant (&#039;&#039;p&#039;&#039;s &amp;lt; .0005 &amp;amp; .05, respectively); KCD completion was marginal (&#039;&#039;p&#039;&#039; = .08) and problem completion was &#039;&#039;ns&#039;&#039; (&#039;&#039;t&#039;&#039; &amp;lt; 1).  After omitting CQPR from the regression model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .50, &#039;&#039;F&#039;&#039;(3, 57) = 18.74, &#039;&#039;p&#039;&#039; &amp;lt; .00001), the effect of KCD completion reached statistical significance (&#039;&#039;p&#039;&#039; = .012) while that of problem completion remained &#039;&#039;ns&#039;&#039; (&#039;&#039;t&#039;&#039; &amp;lt; 1).&lt;br /&gt;
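&lt;br /&gt;
For readers who want to reproduce this style of analysis, the sketch below shows how the full and reduced models just described could be fit in Python with statsmodels.  It is an illustration only: the variable names and the synthetic data are assumptions, not the study&#039;s actual records or analysis scripts.&lt;br /&gt;
&lt;br /&gt;
 import numpy as np&lt;br /&gt;
 import statsmodels.api as sm&lt;br /&gt;
 &lt;br /&gt;
 # Synthetic stand-in data (an assumption); one value per student.&lt;br /&gt;
 rng = np.random.default_rng(0)&lt;br /&gt;
 n = 59                                  # sample size implied by F(4, 54)&lt;br /&gt;
 pre = rng.uniform(20, 80, n)            # pre-test score&lt;br /&gt;
 cqpr = rng.uniform(2.0, 4.0, n)         # cumulative GPA (CQPR)&lt;br /&gt;
 kcds = rng.integers(0, 12, n)           # viable KCDs completed&lt;br /&gt;
 probs = rng.integers(0, 40, n)          # viable target problems completed&lt;br /&gt;
 post = 0.6 * pre + 5 * cqpr + 1.5 * kcds + 0.1 * probs + rng.normal(0, 8, n)&lt;br /&gt;
 &lt;br /&gt;
 # Full model: post-test on pre-test, CQPR, KCD count, and problem count.&lt;br /&gt;
 X_full = sm.add_constant(np.column_stack([pre, cqpr, kcds, probs]))&lt;br /&gt;
 print(sm.OLS(post, X_full).fit().summary())&lt;br /&gt;
 &lt;br /&gt;
 # Reduced model with CQPR omitted, as in the follow-up analysis.&lt;br /&gt;
 X_red = sm.add_constant(np.column_stack([pre, kcds, probs]))&lt;br /&gt;
 print(sm.OLS(post, X_red).fit().summary())&lt;br /&gt;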
&lt;br /&gt;
Regressing post-test qualitative subscore on pre-test qualitative subscore, CQPR, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .37, &#039;&#039;F&#039;&#039;(4, 54) = 7.96, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors, but only pre-test subscore was statistically significant (&#039;&#039;p&#039;&#039; &amp;lt; .005); KCD completion reached marginal status (&#039;&#039;p&#039;&#039; = .07) only after dropping CQPR from the regression model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .35, &#039;&#039;F&#039;&#039;(3, 57) = 10.33, &#039;&#039;p&#039;&#039; &amp;lt; .00001).  The effect of problem completion was &#039;&#039;ns&#039;&#039; (&#039;&#039;t&#039;&#039; &amp;lt; 1) in both models.&lt;br /&gt;
&lt;br /&gt;
Regressing post-test quantitative subscore on pre-test quantitative subscore, CQPR, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .46, &#039;&#039;F&#039;&#039;(4, 54) = 11.53, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors, but only pre-test subscore and CQPR were statistically significant (&#039;&#039;p&#039;&#039;s &amp;lt; .0005 &amp;amp; .01, respectively); KCD completion reached statistical significance (&#039;&#039;p&#039;&#039; = .015) only after dropping CQPR from the regression model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .41, &#039;&#039;F&#039;&#039;(3, 57) = 13.31, &#039;&#039;p&#039;&#039; &amp;lt; .00001).  The effect of problem completion was &#039;&#039;ns&#039;&#039; in both models.&lt;br /&gt;
&lt;br /&gt;
A marginal regression of normalized (Estes) gain scores on CQPR, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .12, &#039;&#039;F&#039;&#039;(3, 55) = 2.59, &#039;&#039;p&#039;&#039; = .062) showed positive contributions of all factors, but the only factor that was even marginal was KCD completion (&#039;&#039;p&#039;&#039; = .09).  However, omitting CQPR from the model resulted in a significant regression (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .13, &#039;&#039;F&#039;&#039;(2, 58) = 4.24, &#039;&#039;p&#039;&#039; &amp;lt; .05) with a significant effect of KCD completion (&#039;&#039;p&#039;&#039; = .016).  The effect of problem completion was &#039;&#039;ns&#039;&#039; (&#039;&#039;t&#039;&#039; &amp;lt; 1) in both models.&lt;br /&gt;
&lt;br /&gt;
==== Final Exam &amp;amp; Course Grades ====&lt;br /&gt;
&lt;br /&gt;
A regression of exam score on CQPR, KCD completion counts, and problem completion counts was significant (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .47, &#039;&#039;F&#039;&#039;(3, 68) = 20.17, &#039;&#039;p&#039;&#039; &amp;lt; .00001) and showed positive contributions of all three factors, but only CQPR had a statistically significant effect (&#039;&#039;p&#039;&#039; &amp;lt; .00001).  However, when we omitted CQPR from the model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .16, &#039;&#039;F&#039;&#039;(2, 72) = 7.08, &#039;&#039;p&#039;&#039; &amp;lt; .005), both KCD and problem completion counts reached statistical significance (&#039;&#039;p&#039;&#039;s &amp;lt; .05).  A regression of final grade on all three factors was significant (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .56, &#039;&#039;F&#039;&#039;(3, 53) = 22.10, &#039;&#039;p&#039;&#039; &amp;lt; .00001), but again only CQPR had a statistically significant effect (&#039;&#039;p&#039;&#039; &amp;lt; .00001).  When we omitted CQPR from this model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .13, &#039;&#039;F&#039;&#039;(2, 55) = 3.99, &#039;&#039;p&#039;&#039; &amp;lt; .05), neither factor of interest reached significance, but KCD completion had a stronger marginal effect (&#039;&#039;p&#039;&#039; = .073) than did problem completion (&#039;&#039;p&#039;&#039; = .106).&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
Despite being unable to perform many of our intended analyses, including examinations of group differences between our old Short KCDs and new Long KCDs, we were able to replicate our prior finding (Katz et al., 2007) that it was KCD completion, rather than the completion of target homework problems, that significantly predicted post-test performance (for most scores and subscores).  In other words, whether the KCDs were Short or Long, doing more of them better predicted post-test scores than did solving homework problems.&lt;br /&gt;
&lt;br /&gt;
Moreover, we found that both factors significantly predicted scores on the final exam.  That is, the more KCDs and target homework problems students completed, the better they performed on the final exam.  Although neither factor had a statistically significant effect on final course grades, the same trend existed, with KCD completion being a stronger predictor of course grades than was homework problem completion.&lt;br /&gt;
&lt;br /&gt;
This marked the first time in our line of work with KCDs (Connelly &amp;amp; Katz, 2006; Katz et al., 2005, 2007) that learning benefits of our dialogues transferred to longer-term performance measures.  Future work could investigate the relative benefits of our Short, largely qualitative KCDs versus our Long KCDs with both qualitative and quantitative knowledge, as well as explicit ties between them.&lt;br /&gt;
&lt;br /&gt;
This study also has implications for other studies of learning gains from instructional interventions, which tend to show small (if any) effects.  Investigators&#039; attempts to “clean” their data after being confronted with “cheating” and “gaming” behaviors may be worthwhile, in that they might increase the signal-to-noise ratio enough for intended (or perhaps even unintended) learning effects to emerge.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to Advisory Board, January 2008&lt;br /&gt;
* Virtual brief paper on salvaged data accepted at ED-MEDIA 2009:&lt;br /&gt;
** Connelly, J., &amp;amp; Katz, S. (in press). Toward more robust learning of physics via reflective dialogue extensions. To appear in &#039;&#039;Proceedings of ED-MEDIA 2009&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Connelly, J., &amp;amp; Katz, S. (2006). Intelligent dialogue support for physics problem solving: Some preliminary mixed results. &#039;&#039;Technology, Instruction, Cognition and Learning, 4&#039;&#039;, 1-29. Philadelphia, PA: Old City Publishing.&lt;br /&gt;
*Katz, S., Connelly, J., &amp;amp; Wilson, C. (2005). When should dialogues in a scaffolded learning environment take place? In P. Kommers &amp;amp; G. Richards (Eds.), &#039;&#039;Proceedings of ED-MEDIA 2005&#039;&#039; (pp. 2850-2855). Norfolk, VA: AACE.&lt;br /&gt;
*Katz, S., Connelly, J., &amp;amp; Wilson, C. (2007).  Out of the lab and into the classroom: An evaluation of reflective dialogue in Andes.  In R. Luckin, K. R. Koedinger, &amp;amp; J. Greer (Eds.), &#039;&#039;Artificial Intelligence in Education: Building Technology Rich Learning Contexts that Work&#039;&#039; (pp. 425-432). Amsterdam: IOS Press.&lt;br /&gt;
*Leonard, W. J., Dufresne, R. J., &amp;amp; Mestre, J. P. (1996). Using qualitative problem-solving strategies to highlight the role of conceptual knowledge in solving problems. &#039;&#039;American Journal of Physics, 64&#039;&#039; (12), 1495-1503.&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005a).  The Andes physics tutoring system: Lessons learned.  &#039;&#039;International Journal of Artificial Intelligence in Education&#039;&#039;, &#039;&#039;15&#039;&#039;(3).&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005b).  The Andes physics tutoring system: Five years of evaluation.  In G. McCalla, C.K. Looi, B. Bredeweg, &amp;amp; J. Breuker (Eds.), &#039;&#039;Artificial Intelligence in Education&#039;&#039; (pp. 678-685).  Amsterdam: IOS Press.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Post-practice reflection (Katz)|Post-practice reflection (Katz &amp;amp; Connelly, 2005)]]&lt;br /&gt;
* [[Reflective Dialogues (Katz)|Reflective Dialogues (Katz, Connelly &amp;amp; Treacy, 2006)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
Self-explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; Vanlehn)]]&lt;br /&gt;
&lt;br /&gt;
====  Future plans ====&lt;br /&gt;
Our plans to wrap up the project:&lt;br /&gt;
* write a conference paper reporting updated findings from the salvaged data&lt;br /&gt;
* write a conference paper detailing the process by which we salvaged usable data&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=9095</id>
		<title>Extending Reflective Dialogue Support (Katz &amp; Connelly)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=9095"/>
		<updated>2009-05-04T21:05:33Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Annotated bibliography */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Extending Automated Dialogue Support for Robust Learning of Physics ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz &amp;amp; John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 10/1/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 9/30/08&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 75&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 125 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability, while other students show at least a glimmer of understanding of basic physics concepts and principles but are unable to use this knowledge to solve quantitative problems.  The present research seeks to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge via [[post-practice reflection]] dialogues that guide students in learning and practicing the concepts and principles associated with a just-solved physics problem.  It builds upon our 2006 LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]] by trying to better support [[robust learning]] via a third condition in which students work through longer dialogues designed to foster [[transfer]] by specifically tying together qualitative and quantitative knowledge in different contexts.&lt;br /&gt;
&lt;br /&gt;
Software errors and students&#039; circumvention of both the system and instructors’ scoring rubrics rendered our intended detailed analyses impossible.  However, despite the various manifestations of “cheating” and otherwise gaming the system that were discovered, we found that dialogue exposure significantly boosted gains on a post-test and on a largely quantitative final exam two months after the end of our intervention, providing evidence of some robust learning and transfer to related problem solving contexts.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
* [[Transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge (i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them), combined with quantitative practice in applying these [[knowledge components]] in different contexts, enhance students’ problem-solving ability more than additional problem solving and better foster [[transfer]] and [[robust learning]]?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that instructors deal with a double-edged sword.  Some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability.  Other students display the reverse problem: they show at least a glimmer of understanding of basic physics concepts and principles, but are unable to use this knowledge to solve quantitative problems.   Still other students master neither qualitative nor quantitative understanding of physics; very few master both.  Thus, the instructional challenge motivating this project is to find effective pedagogical strategies to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge.   Our scientific goal is to determine whether explicit and implicit learning can be effectively combined via post-practice dialogues that guide students in reflecting on the concepts and principles associated with a just-solved physics problem.  The main hypothesis tested is that, in the context of tutored problem solving, &#039;&#039;integrative reflective dialogues&#039;&#039; that explicitly tie qualitative knowledge to quantitative knowledge can improve quantitative problem-solving ability and retention of qualitative knowledge better than problem-solving practice (implicit learning) alone.  &lt;br /&gt;
&lt;br /&gt;
To test this hypothesis, we conducted an experiment in the PSLC Physics LearnLab at the US Naval Academy in sections that use the [[Andes]] physics tutoring system (VanLehn et al., 2005a, 2005b).  We compared students who were randomly assigned to one of three conditions on measures of qualitative and quantitative problem-solving performance.  The two treatment conditions engaged in automated reflective dialogues after solving quantitative physics problems, while the control condition solved the same set of problems (plus a few additional problems to balance time on task) without any reflective dialogues, using the standard version of Andes.  In one treatment condition, the reflective dialogues individually targeted the three main types of knowledge that experts employ during problem solving, according to Leonard, Dufresne, &amp;amp; Mestre (1996): knowledge about &#039;&#039;what&#039;&#039; principle(s) to apply to a given problem, &#039;&#039;how&#039;&#039; to apply these principles (e.g., what equations to use), and &#039;&#039;why&#039;&#039; to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).   In a prior LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], this intervention significantly improved students’ qualitative understanding of basic mechanics, as measured by pre-test to post-test gain scores.  However, students did not outperform standard Andes users on more [[robust learning]] measures of [[transfer]] (e.g., performance on quantitative course exams) and on a measure of retention of qualitative problem-solving ability (Katz, Connelly, &amp;amp; Wilson, 2007).  The extended dialogue condition we implemented differs from the other dialogue condition in three main ways:  (1) reflective dialogues contained more problem variations (&#039;&#039;what if&#039;&#039; scenarios), designed to support both qualitative and quantitative knowledge (most of our previous &#039;&#039;what if&#039;&#039; scenarios were qualitative only); (2) these “what if” scenarios were tied to the corresponding Andes problem-solving context &#039;&#039;&#039;and&#039;&#039;&#039; to new contexts, to help support near and far [[transfer]]; and (3) students were prompted to state the rules ([[knowledge components]]) applied to solving the problem variations, in order to promote a principle-based approach to learning, and they were given feedback that makes these rules explicit.  Our goal is to determine whether reflective dialogues that make the links between qualitative and quantitative physics knowledge explicit are more effective than both our previous dialogues and an implicit learning condition that is based on problem-solving practice alone.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
We had also intended to measure &#039;&#039;short-term retention&#039;&#039; via student performance on course exams that covered target topics (statics; translational dynamics, including circular motion; work-energy; power; and linear momentum, including impulse).  However, critical omissions in the data provided to us by course instructors rendered such measurements impossible.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
Students within each course section were block-randomly assigned to one of three dialogue conditions in which the usage of [[Knowledge Construction Dialogues]] (KCDs) differed.  The short-KCD condition approximated the treatment condition from last year&#039;s study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], in which mostly conceptual dialogues followed problem solving on selected [[Andes]] problems.&lt;br /&gt;
&lt;br /&gt;
[[Image:6kcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
The new long-KCD condition appended more quantitative practice and [[transfer]] content (via additional &#039;&#039;what if&#039;&#039; scenarios) to the short KCDs; students in this condition were assigned five fewer Andes problems than those in the short-KCD condition to attempt to equate time on task.&lt;br /&gt;
&lt;br /&gt;
[[Image:7lkcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
The control condition was identical to last year&#039;s; students saw no KCDs and were assigned five more Andes problems than the short-KCD students.&lt;br /&gt;
&lt;br /&gt;
[[Image:E1b-50.jpg]]&lt;br /&gt;
&lt;br /&gt;
The following student variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
* Number of viable target problems completed (before the post-test, or before the final exam)&lt;br /&gt;
* Number of viable (non-gibberish) dialogues completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
The main hypotheses tested were (a) that explicit, part-task training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, would be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems (replicating last year&#039;s study); and (b) that extended dialogues with additional quantitative practice in applying these [[knowledge components]] in different problem-solving contexts would better foster [[transfer]] and [[robust learning]].&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
Preliminary analyses showed marginal support for a replication of last year&#039;s finding that student performance on the post-test relative to the pre-test was significantly influenced by the number of dialogues they completed, as opposed to the number of target problems they completed.  However, due to software glitches during data collection, low student participation (completing assigned homework and/or dialogues) in some course sections, and evidence of cheating and general &amp;quot;gaming the system&amp;quot; by some students, more detailed analyses were made possible only by the identification and omission of noisy data from our overall corpus.  This lengthy and painstaking process required us to examine KCD and Andes logs in a much lower level of detail than for prior studies, in an attempt to determine whether each students&#039; list of assigned problems and dialogues represented viable data (i.e., if the problems and KCDs were &#039;&#039;legitimately&#039;&#039; completed).&lt;br /&gt;
&lt;br /&gt;
In the end, the various problems plaguing our data rendered impossible any comparisons between our two treatment conditions.  However, we were able to salvage enough data to compare student performance relative to degrees of viable problem and dialogue completion.  The salvaged exposure measures were the number of viable KCDs completed (vs. KCDs that were &amp;quot;passed through&amp;quot; with gibberish responses) and the number of viable problems completed (vs. those on which students likely &amp;quot;cheated&amp;quot;, as determined by answer-only &amp;quot;solutions&amp;quot; or minimal Andes inputs to attain a score of 50, the minimal criterion for full credit used by one course instructor).&lt;br /&gt;
&lt;br /&gt;
==== Pre- and Post-Tests ====&lt;br /&gt;
&lt;br /&gt;
Regressing post-test score on pre-test score, CQPR, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .52, &#039;&#039;F&#039;&#039;(4, 54) = 14.82, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors, but only pre-test score and CQPR were statistically significant (&#039;&#039;p&#039;&#039;s &amp;lt; .0005 &amp;amp; .05, respectively); KCD completion was marginal (&#039;&#039;p&#039;&#039; = .08) and problem completion was ns (t &amp;lt; 1).  After omitting CQPR from the regression model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .50, &#039;&#039;F&#039;&#039;(3, 57) = 18.74, &#039;&#039;p&#039;&#039; &amp;lt; .00001), the effect of KCD completion reached statistical significance (&#039;&#039;p&#039;&#039; = .012) while that of problem completion remained &#039;&#039;ns&#039;&#039; (&#039;&#039;t&#039;&#039; &amp;lt; 1).&lt;br /&gt;
&lt;br /&gt;
Regressing post-test qualitative subscore on pre-test qualitative subscore, CQPR, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .37, &#039;&#039;F&#039;&#039;(4, 54) = 7.96, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors, but only pre-test subscore was statistically significant (&#039;&#039;p&#039;&#039; &amp;lt; .005); KCD completion reached marginal status (&#039;&#039;p&#039;&#039; = .07) only after dropping CQPR from the regression model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .35, &#039;&#039;F&#039;&#039;(3, 57) = 10.33, &#039;&#039;p&#039;&#039; &amp;lt; .00001).  The effect of problem completion was &#039;&#039;ns&#039;&#039; (&#039;&#039;t&#039;&#039; &amp;lt; 1) in both models.&lt;br /&gt;
&lt;br /&gt;
Regressing post-test quantitative subscore on pre-test quantitative subscore, CQPR, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .46, &#039;&#039;F&#039;&#039;(4, 54) = 11.53, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors, but only pre-test subscore and CQPR were statistically significant (&#039;&#039;p&#039;&#039;s &amp;lt; .0005 &amp;amp; .01, respectively); KCD completion reached statistical significance (&#039;&#039;p&#039;&#039; = .015) only after dropping CQPR from the regression model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .41, &#039;&#039;F&#039;&#039;(3, 57) = 13.31, &#039;&#039;p&#039;&#039; &amp;lt; .00001).  The effect of problem completion was &#039;&#039;ns&#039;&#039; in both models.&lt;br /&gt;
&lt;br /&gt;
A marginal regression of normalized (Estes) gain scores on CQPR, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .12, &#039;&#039;F&#039;&#039;(3, 55) = 2.59, &#039;&#039;p&#039;&#039; = .062) showed positive contributions of all factors, but the only factor that was even marginal was KCD completion (&#039;&#039;p&#039;&#039; = .09).  However, omitting CQPR from the model resulted in a significant regression (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .13, &#039;&#039;F&#039;&#039;(2, 58) = 4.24, &#039;&#039;p&#039;&#039; &amp;lt; .05) with a significant effect of KCD completion (&#039;&#039;p&#039;&#039; = .016).  The effect of problem completion was &#039;&#039;ns&#039;&#039; (&#039;&#039;t&#039;&#039; &amp;lt; 1) in both models.&lt;br /&gt;
&lt;br /&gt;
==== Final Exam &amp;amp; Course Grades ====&lt;br /&gt;
&lt;br /&gt;
A regression of exam score on CQPR, KCD completion counts, and problem completion counts was significant (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .47, &#039;&#039;F&#039;&#039;(3, 68) = 20.17, &#039;&#039;p&#039;&#039; &amp;lt; .00001) and showed positive contributions of all three factors, but only CQPR had a statistically significant effect (&#039;&#039;p&#039;&#039; &amp;lt; .00001).  However, when I omitted CQPR from the model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .16, &#039;&#039;F&#039;&#039;(2, 72) = 7.08, &#039;&#039;p&#039;&#039; &amp;lt; .005), both KCD and problem completion counts reached statistical significance (&#039;&#039;p&#039;&#039;s &amp;lt; .05).  A regression of final grade on all three factors was significant (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .56, &#039;&#039;F&#039;&#039;(3, 53) = 22.10, &#039;&#039;p&#039;&#039; &amp;lt; .00001), but again only CQPR had a statistically significant effect (p &amp;lt; .00001).  When I omitted CQPR from this model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .13, &#039;&#039;F&#039;&#039;(2, 55) = 3.99, &#039;&#039;p&#039;&#039; &amp;lt; .05), neither factor of interest reached significance, but KCD completion had a stronger marginal effect (&#039;&#039;p&#039;&#039; = .073) than did problem completion (&#039;&#039;p&#039;&#039; = .106).&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
Despite being unable to perform many of our intended analyses, including examinations of group differences between our old Short KCDs and new Long KCDs, we were able to replicate our prior finding (Katz et al. 2007) that it was KCD completion, as opposed to the completion of target homework problems, that significantly (for most scores and subscores) improved post-test performance.  In other words, whether the KCDs were Short or Long, doing more of them better predicted post-test scores than did the solving of homework problems.&lt;br /&gt;
&lt;br /&gt;
Moreover, we found that both factors significantly improved scores on the final exam.  That is, the more KCDs and target homework problems students did, the better they performed on the final exam.  Although the effects of both factors were not statistically significant on their final course grades, the same trend existed, with KCDs being a stronger predictor of course grades that were homework problems.&lt;br /&gt;
&lt;br /&gt;
This marked the first time in our line of work with KCDs (Connelly &amp;amp; Katz, 2006; Katz et al., 2005, 2007) that learning benefits of our dialogues transferred to longer-term performance measures.  Future work could investigate the relative benefits of our Short, largely qualitative KCDs versus our Long KCDs with both qualitative and quantitative knowledge, as well as explicit ties between them.&lt;br /&gt;
&lt;br /&gt;
This study also has implications for other studies of learning gains from instructional interventions, which tend to show small (if any) effects.  Investigator attempts to “clean” their data, after being confronted with “cheating” and “gaming” behaviors, may be worthwhile in that they might increase the signal to noise ratio enough for intended (or perhaps even unintended) learning effects to emerge.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to Advisory Board, January 2008&lt;br /&gt;
* Virtual brief paper accepted at ED-MEDIA09:&lt;br /&gt;
** Connelly, J., &amp;amp; Katz, S. (in press). Toward more robust learning of physics via reflective dialogue extensions. To appear in &#039;&#039;Proceedings of ED-MEDIA 2009&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Connelly, J., &amp;amp; Katz, S. (2006). Intelligent dialogue support for physics problem solving: Some preliminary mixed results. &#039;&#039;Technology, Instruction, Cognition and Learning, 4&#039;&#039;, 1-29. Philadelphia, PA: Old City Publishing.&lt;br /&gt;
*Katz, S., Connelly, J., &amp;amp; Wilson, C. (2005). When should dialogues in a scaffolded learning environment take place? In P. Kommers &amp;amp; G. Richards (Eds.), &#039;&#039;Proceedings of ED-MEDIA 2005&#039;&#039; (pp. 2850-2855). Norfolk, VA: AACE.&lt;br /&gt;
*Katz, S., Connelly, J., &amp;amp; Wilson, C. (2007).  Out of the lab and into the classroom: An evaluation of reflective dialogue in Andes.  In R. Luckin, K. R. Koedinger, &amp;amp; J. Greer (Eds.), &#039;&#039;Artificial Intelligence in Education: Building Technology Rich Learning Contexts that Work&#039;&#039; (pp. 425-432). Amsterdam: IOS Press.&lt;br /&gt;
*Leonard, W. J., Dufresne, R. J., &amp;amp; Mestre, J. P. (1996). Using qualitative problem-solving strategies to highlight the role of conceptual knowledge in solving problems. &#039;&#039;American Journal of Physics, 64&#039;&#039; (12), 1495-1503.&lt;br /&gt;
*VanLehn, K., Lynch, C., Schmulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Lessons learned.  &#039;&#039;International Journal of Artificial Intelligence and Education&#039;&#039;, &#039;&#039;15&#039;&#039;(3).&lt;br /&gt;
*VanLehn, K., Lynch, C., Schmulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Five years of evaluation.  In G. McCalla, C.K. Looi, B. Bredeweg, &amp;amp; J. Breuker (Eds.), &#039;&#039;Artificial Intelligence in Education&#039;&#039; (pp. 678-685).  Amsterdam, Netherlands: IOS Press.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Post-practice reflection (Katz)|Post-practice reflection (Katz &amp;amp; Connelly, 2005)]]&lt;br /&gt;
* [[Reflective Dialogues (Katz)|Reflective Dialogues (Katz, Connelly &amp;amp; Treacy, 2006)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; Vanlehn)]]&lt;br /&gt;
&lt;br /&gt;
====  Future plans ====&lt;br /&gt;
Our future plans to wrap up the project:&lt;br /&gt;
* write conference paper reporting updated findings on salvaged data&lt;br /&gt;
* write conference paper detailing the process by which we salvaged usable data&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=8730</id>
		<title>Extending Reflective Dialogue Support (Katz &amp; Connelly)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=8730"/>
		<updated>2008-12-22T22:51:35Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Extending Automated Dialogue Support for Robust Learning of Physics ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz &amp;amp; John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 10/1/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 9/30/08&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 75&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 125 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability, while other students show at least a glimmer of understanding of basic physics concepts and principles but are unable to use this knowledge to solve quantitative problems.  The present research seeks to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge via [[post-practice reflection]] dialogues that guide students in learning and practicing the concepts and principles associated with a just-solved physics problem.  It builds upon our 2006 LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]] by trying to better support [[robust learning]] via a third condition in which students work through longer dialogues designed to foster [[transfer]] by specifically tying together qualitative and quantitative knowledge in different contexts.&lt;br /&gt;
&lt;br /&gt;
Software errors and students&#039; circumvention of both the system and instructors’ scoring rubrics rendered our intended detailed analyses impossible.  However, despite the various manifestations of “cheating” and otherwise gaming the system that were discovered, we found that dialogue exposure significantly boosted gains on a post-test and on a largely quantitative final exam two months after the end of our intervention, providing evidence of some robust learning and transfer to related problem solving contexts.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
* [[Transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge (i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them), combined with quantitative practice in applying these [[knowledge components]] in different contexts, enhance students’ problem-solving ability more than additional problem solving and better foster [[transfer]] and [[robust learning]]?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that instructors deal with a double-edged sword.  Some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability.  Other students display the reverse problem: they show at least a glimmer of understanding of basic physics concepts and principles, but are unable to use this knowledge to solve quantitative problems.   Still other students master neither qualitative nor quantitative understanding of physics; very few master both.  Thus, the instructional challenge motivating this project is to find effective pedagogical strategies to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge.   Our scientific goal is to determine whether explicit and implicit learning can be effectively combined via post-practice dialogues that guide students in reflecting on the concepts and principles associated with a just-solved physics problem.  The main hypothesis tested is that, in the context of tutored problem solving, &#039;&#039;integrative reflective dialogues&#039;&#039; that explicitly tie qualitative knowledge to quantitative knowledge can improve quantitative problem-solving ability and retention of qualitative knowledge better than problem-solving practice (implicit learning) alone.  &lt;br /&gt;
&lt;br /&gt;
To test this hypothesis, we conducted an experiment in the PSLC Physics LearnLab at the US Naval Academy in sections that use the [[Andes]] physics tutoring system (VanLehn et al., 2005a, 2005b).  We compared students who were randomly assigned to one of three conditions on measures of qualitative and quantitative problem-solving performance.  The two treatment conditions engaged in automated reflective dialogues after solving quantitative physics problems, while the control condition solved the same set of problems (plus a few additional problems to balance time on task) without any reflective dialogues, using the standard version of Andes.  In one treatment condition, the reflective dialogues individually targeted the three main types of knowledge that experts employ during problem solving, according to Leonard, Dufresne, &amp;amp; Mestre (1996): knowledge about &#039;&#039;what&#039;&#039; principle(s) to apply to a given problem, &#039;&#039;how&#039;&#039; to apply these principles (e.g., what equations to use), and &#039;&#039;why&#039;&#039; to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).   In a prior LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], this intervention significantly improved students’ qualitative understanding of basic mechanics, as measured by pre-test to post-test gain scores.  However, students did not outperform standard Andes users on more [[robust learning]] measures of [[transfer]] (e.g., performance on quantitative course exams) and on a measure of retention of qualitative problem-solving ability (Katz, Connelly, &amp;amp; Wilson, 2007).  The extended dialogue condition we implemented differs from the other dialogue condition in three main ways:  (1) reflective dialogues contained more problem variations (&#039;&#039;what if&#039;&#039; scenarios), designed to support both qualitative and quantitative knowledge (most of our previous &#039;&#039;what if&#039;&#039; scenarios were qualitative only); (2) these “what if” scenarios were tied to the corresponding Andes problem-solving context &#039;&#039;&#039;and&#039;&#039;&#039; to new contexts, to help support near and far [[transfer]]; and (3) students were prompted to state the rules ([[knowledge components]]) applied to solving the problem variations, in order to promote a principle-based approach to learning, and they were given feedback that makes these rules explicit.  Our goal is to determine whether reflective dialogues that make the links between qualitative and quantitative physics knowledge explicit are more effective than both our previous dialogues and an implicit learning condition that is based on problem-solving practice alone.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
We had also intended to measure &#039;&#039;short-term retention&#039;&#039; via student performance on course exams that covered target topics (statics; translational dynamics, including circular motion; work-energy; power; and linear momentum, including impulse).  However, critical omissions in the data provided to us by course instructors rendered such measurements impossible.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
Students within each course section were block-randomly assigned to one of three dialogue conditions in which the usage of [[Knowledge Construction Dialogues]] (KCDs) differed.  The short-KCD condition approximated the treatment condition from last year&#039;s study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], in which mostly conceptual dialogues followed problem solving on selected [[Andes]] problems.&lt;br /&gt;
&lt;br /&gt;
[[Image:6kcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
The new long-KCD condition appended more quantitative practice and [[transfer]] content (via additional &#039;&#039;what if&#039;&#039; scenarios) to the short KCDs; students in this condition were assigned five fewer Andes problems than those in the short-KCD condition to attempt to equate time on task.&lt;br /&gt;
&lt;br /&gt;
[[Image:7lkcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
The control condition was identical to last year&#039;s; students saw no KCDs and were assigned five more Andes problems than the short-KCD students.&lt;br /&gt;
&lt;br /&gt;
[[Image:E1b-50.jpg]]&lt;br /&gt;
&lt;br /&gt;
The following student variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
* Number of viable target problems completed (before the post-test, or before the final exam)&lt;br /&gt;
* Number of viable (non-gibberish) dialogues completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
The main hypotheses tested were (a) that explicit, part-task training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, would be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems (replicating last year&#039;s study); and (b) that extended dialogues with additional quantitative practice in applying these [[knowledge components]] in different problem-solving contexts would better foster [[transfer]] and [[robust learning]].&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
Preliminary analyses showed marginal support for a replication of last year&#039;s finding that student performance on the post-test relative to the pre-test was significantly influenced by the number of dialogues they completed, as opposed to the number of target problems they completed.  However, due to software glitches during data collection, low student participation (completing assigned homework and/or dialogues) in some course sections, and evidence of cheating and general &amp;quot;gaming the system&amp;quot; by some students, more detailed analyses were made possible only by the identification and omission of noisy data from our overall corpus.  This lengthy and painstaking process required us to examine KCD and Andes logs in a much lower level of detail than for prior studies, in an attempt to determine whether each students&#039; list of assigned problems and dialogues represented viable data (i.e., if the problems and KCDs were &#039;&#039;legitimately&#039;&#039; completed).&lt;br /&gt;
&lt;br /&gt;
In the end, the various problems plaguing our data rendered impossible any comparisons between our two treatment conditions.  However, we were able to salvage enough data to compare student performance relative to degrees of viable problem and dialogue completion.  The salvaged exposure measures were the number of viable KCDs completed (vs. KCDs that were &amp;quot;passed through&amp;quot; with gibberish responses) and the number of viable problems completed (vs. those on which students likely &amp;quot;cheated&amp;quot;, as determined by answer-only &amp;quot;solutions&amp;quot; or minimal Andes inputs to attain a score of 50, the minimal criterion for full credit used by one course instructor).&lt;br /&gt;
&lt;br /&gt;
==== Pre- and Post-Tests ====&lt;br /&gt;
&lt;br /&gt;
Regressing post-test score on pre-test score, CQPR, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .52, &#039;&#039;F&#039;&#039;(4, 54) = 14.82, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors, but only pre-test score and CQPR were statistically significant (&#039;&#039;p&#039;&#039;s &amp;lt; .0005 &amp;amp; .05, respectively); KCD completion was marginal (&#039;&#039;p&#039;&#039; = .08) and problem completion was ns (t &amp;lt; 1).  After omitting CQPR from the regression model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .50, &#039;&#039;F&#039;&#039;(3, 57) = 18.74, &#039;&#039;p&#039;&#039; &amp;lt; .00001), the effect of KCD completion reached statistical significance (&#039;&#039;p&#039;&#039; = .012) while that of problem completion remained &#039;&#039;ns&#039;&#039; (&#039;&#039;t&#039;&#039; &amp;lt; 1).&lt;br /&gt;
&lt;br /&gt;
Regressing post-test qualitative subscore on pre-test qualitative subscore, CQPR, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .37, &#039;&#039;F&#039;&#039;(4, 54) = 7.96, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors, but only pre-test subscore was statistically significant (&#039;&#039;p&#039;&#039; &amp;lt; .005); KCD completion reached marginal status (&#039;&#039;p&#039;&#039; = .07) only after dropping CQPR from the regression model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .35, &#039;&#039;F&#039;&#039;(3, 57) = 10.33, &#039;&#039;p&#039;&#039; &amp;lt; .00001).  The effect of problem completion was &#039;&#039;ns&#039;&#039; (&#039;&#039;t&#039;&#039; &amp;lt; 1) in both models.&lt;br /&gt;
&lt;br /&gt;
Regressing post-test quantitative subscore on pre-test quantitative subscore, CQPR, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .46, &#039;&#039;F&#039;&#039;(4, 54) = 11.53, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors, but only pre-test subscore and CQPR were statistically significant (&#039;&#039;p&#039;&#039;s &amp;lt; .0005 &amp;amp; .01, respectively); KCD completion reached statistical significance (&#039;&#039;p&#039;&#039; = .015) only after dropping CQPR from the regression model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .41, &#039;&#039;F&#039;&#039;(3, 57) = 13.31, &#039;&#039;p&#039;&#039; &amp;lt; .00001).  The effect of problem completion was &#039;&#039;ns&#039;&#039; in both models.&lt;br /&gt;
&lt;br /&gt;
A marginal regression of normalized (Estes) gain scores on CQPR, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .12, &#039;&#039;F&#039;&#039;(3, 55) = 2.59, &#039;&#039;p&#039;&#039; = .062) showed positive contributions of all factors, but the only factor that was even marginal was KCD completion (&#039;&#039;p&#039;&#039; = .09).  However, omitting CQPR from the model resulted in a significant regression (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .13, &#039;&#039;F&#039;&#039;(2, 58) = 4.24, &#039;&#039;p&#039;&#039; &amp;lt; .05) with a significant effect of KCD completion (&#039;&#039;p&#039;&#039; = .016).  The effect of problem completion was &#039;&#039;ns&#039;&#039; (&#039;&#039;t&#039;&#039; &amp;lt; 1) in both models.&lt;br /&gt;
&lt;br /&gt;
==== Final Exam &amp;amp; Course Grades ====&lt;br /&gt;
&lt;br /&gt;
A regression of exam score on CQPR, KCD completion counts, and problem completion counts was significant (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .47, &#039;&#039;F&#039;&#039;(3, 68) = 20.17, &#039;&#039;p&#039;&#039; &amp;lt; .00001) and showed positive contributions of all three factors, but only CQPR had a statistically significant effect (&#039;&#039;p&#039;&#039; &amp;lt; .00001).  However, when I omitted CQPR from the model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .16, &#039;&#039;F&#039;&#039;(2, 72) = 7.08, &#039;&#039;p&#039;&#039; &amp;lt; .005), both KCD and problem completion counts reached statistical significance (&#039;&#039;p&#039;&#039;s &amp;lt; .05).  A regression of final grade on all three factors was significant (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .56, &#039;&#039;F&#039;&#039;(3, 53) = 22.10, &#039;&#039;p&#039;&#039; &amp;lt; .00001), but again only CQPR had a statistically significant effect (p &amp;lt; .00001).  When I omitted CQPR from this model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .13, &#039;&#039;F&#039;&#039;(2, 55) = 3.99, &#039;&#039;p&#039;&#039; &amp;lt; .05), neither factor of interest reached significance, but KCD completion had a stronger marginal effect (&#039;&#039;p&#039;&#039; = .073) than did problem completion (&#039;&#039;p&#039;&#039; = .106).&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
Despite being unable to perform many of our intended analyses, including examinations of group differences between our old Short KCDs and new Long KCDs, we were able to replicate our prior finding (Katz et al., 2007) that it was KCD completion, rather than the completion of target homework problems, that significantly improved post-test performance for most scores and subscores.  In other words, whether the KCDs were Short or Long, completing more of them was a better predictor of post-test scores than was solving homework problems.&lt;br /&gt;
&lt;br /&gt;
Moreover, we found that both factors significantly improved scores on the final exam: the more KCDs and target homework problems students completed, the better they performed on the final exam.  Although neither factor had a statistically significant effect on final course grades, the same trend held, with KCDs being a stronger predictor of course grades than were homework problems.&lt;br /&gt;
&lt;br /&gt;
This marked the first time in our line of work with KCDs (Connelly &amp;amp; Katz, 2006; Katz et al., 2005, 2007) that learning benefits of our dialogues transferred to longer-term performance measures.  Future work could investigate the relative benefits of our Short, largely qualitative KCDs versus our Long KCDs, which address both qualitative and quantitative knowledge and make the ties between them explicit.&lt;br /&gt;
&lt;br /&gt;
This study also has implications for other studies of learning gains from instructional interventions, which tend to show small (if any) effects.  Investigators’ efforts to “clean” their data after discovering “cheating” and “gaming” behaviors may be worthwhile, in that they can increase the signal-to-noise ratio enough for intended (or perhaps even unintended) learning effects to emerge.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to Advisory Board, January 2008&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Connelly, J., &amp;amp; Katz, S. (2006). Intelligent dialogue support for physics problem solving: Some preliminary mixed results. &#039;&#039;Technology, Instruction, Cognition and Learning, 4&#039;&#039;, 1-29. Philadelphia, PA: Old City Publishing.&lt;br /&gt;
*Katz, S., Connelly, J., &amp;amp; Wilson, C. (2005). When should dialogues in a scaffolded learning environment take place? In P. Kommers &amp;amp; G. Richards (Eds.), &#039;&#039;Proceedings of ED-MEDIA 2005&#039;&#039; (pp. 2850-2855). Norfolk, VA: AACE.&lt;br /&gt;
*Katz, S., Connelly, J., &amp;amp; Wilson, C. (2007).  Out of the lab and into the classroom: An evaluation of reflective dialogue in Andes.  In R. Luckin, K. R. Koedinger, &amp;amp; J. Greer (Eds.), &#039;&#039;Artificial Intelligence in Education: Building Technology Rich Learning Contexts that Work&#039;&#039; (pp. 425-432). Amsterdam: IOS Press.&lt;br /&gt;
*Leonard, W. J., Dufresne, R. J., &amp;amp; Mestre, J. P. (1996). Using qualitative problem-solving strategies to highlight the role of conceptual knowledge in solving problems. &#039;&#039;American Journal of Physics, 64&#039;&#039; (12), 1495-1503.&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Lessons learned.  &#039;&#039;International Journal of Artificial Intelligence in Education&#039;&#039;, &#039;&#039;15&#039;&#039;(3).&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Five years of evaluation.  In G. McCalla, C.K. Looi, B. Bredeweg, &amp;amp; J. Breuker (Eds.), &#039;&#039;Artificial Intelligence in Education&#039;&#039; (pp. 678-685).  Amsterdam: IOS Press.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Post-practice reflection (Katz)|Post-practice reflection (Katz &amp;amp; Connelly, 2005)]]&lt;br /&gt;
* [[Reflective Dialogues (Katz)|Reflective Dialogues (Katz, Connelly &amp;amp; Treacy, 2006)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; Vanlehn)]]&lt;br /&gt;
&lt;br /&gt;
====  Future plans ====&lt;br /&gt;
Our future plans to wrap up the project:&lt;br /&gt;
* write conference paper reporting updated findings on salvaged data&lt;br /&gt;
* write conference paper detailing the process by which we salvaged usable data&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=8727</id>
		<title>Extending Reflective Dialogue Support (Katz &amp; Connelly)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=8727"/>
		<updated>2008-12-22T22:43:02Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Findings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Extending Automated Dialogue Support for Robust Learning of Physics ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz &amp;amp; John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 10/1/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 9/30/08&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 75&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 125 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability, while other students show at least a glimmer of understanding of basic physics concepts and principles but are unable to use this knowledge to solve quantitative problems.  The present research seeks to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge via [[post-practice reflection]] dialogues that guide students in learning and practicing the concepts and principles associated with a just-solved physics problem.  It builds upon our 2006 LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]] by trying to better support [[robust learning]] via a third condition in which students work through longer dialogues designed to foster [[transfer]] by specifically tying together qualitative and quantitative knowledge in different contexts.&lt;br /&gt;
&lt;br /&gt;
Software errors and students&#039; circumvention of both the system and instructors’ scoring rubrics rendered our intended detailed analyses impossible.  However, despite the various manifestations of “cheating” and otherwise gaming the system that were discovered, we found that dialogue exposure significantly boosted gains on a post-test and on a largely quantitative final exam two months after the end of our intervention, providing evidence of some robust learning and transfer to related problem solving contexts.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
* [[Transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge (i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them), combined with quantitative practice in applying these [[knowledge components]] in different contexts, enhance students’ problem-solving ability more than additional problem solving and better foster [[transfer]] and [[robust learning]]?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that instructors deal with a double-edged sword.  Some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability.  Other students display the reverse problem: they show at least a glimmer of understanding of basic physics concepts and principles, but are unable to use this knowledge to solve quantitative problems.   Still other students master neither qualitative nor quantitative understanding of physics; very few master both.  Thus, the instructional challenge motivating this project is to find effective pedagogical strategies to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge.   Our scientific goal is to determine whether explicit and implicit learning can be effectively combined via post-practice dialogues that guide students in reflecting on the concepts and principles associated with a just-solved physics problem.  The main hypothesis tested is that, in the context of tutored problem solving, &#039;&#039;integrative reflective dialogues&#039;&#039; that explicitly tie qualitative knowledge to quantitative knowledge can improve quantitative problem-solving ability and retention of qualitative knowledge better than problem-solving practice (implicit learning) alone.  &lt;br /&gt;
&lt;br /&gt;
To test this hypothesis, we conducted an experiment in the PSLC Physics LearnLab at the US Naval Academy in sections that use the [[Andes]] physics tutoring system (VanLehn et al., 2005a, 2005b).  We compared students who were randomly assigned to one of three conditions on measures of qualitative and quantitative problem-solving performance.  The two treatment conditions engaged in automated reflective dialogues after solving quantitative physics problems, while the control condition solved the same set of problems (plus a few additional problems to balance time on task) without any reflective dialogues, using the standard version of Andes.  In one treatment condition, the reflective dialogues individually targeted the three main types of knowledge that experts employ during problem solving, according to Leonard, Dufresne, &amp;amp; Mestre (1996): knowledge about &#039;&#039;what&#039;&#039; principle(s) to apply to a given problem, &#039;&#039;how&#039;&#039; to apply these principles (e.g., what equations to use), and &#039;&#039;why&#039;&#039; to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).   In a prior LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], this intervention significantly improved students’ qualitative understanding of basic mechanics, as measured by pre-test to post-test gain scores.  However, students did not outperform standard Andes users on more [[robust learning]] measures of [[transfer]] (e.g., performance on quantitative course exams) and on a measure of retention of qualitative problem-solving ability (Katz, Connelly, &amp;amp; Wilson, 2007).  The extended dialogue condition we implemented differs from the other dialogue condition in three main ways:  (1) reflective dialogues contained more problem variations (&#039;&#039;what if&#039;&#039; scenarios), designed to support both qualitative and quantitative knowledge (most of our previous &#039;&#039;what if&#039;&#039; scenarios were qualitative only); (2) these “what if” scenarios were tied to the corresponding Andes problem-solving context &#039;&#039;&#039;and&#039;&#039;&#039; to new contexts, to help support near and far [[transfer]]; and (3) students were prompted to state the rules ([[knowledge components]]) applied to solving the problem variations, in order to promote a principle-based approach to learning, and they were given feedback that makes these rules explicit.  Our goal is to determine whether reflective dialogues that make the links between qualitative and quantitative physics knowledge explicit are more effective than both our previous dialogues and an implicit learning condition that is based on problem-solving practice alone.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
We had also intended to measure &#039;&#039;short-term retention&#039;&#039; via student performance on course exams that covered target topics (statics; translational dynamics, including circular motion; work-energy; power; and linear momentum, including impulse).  However, critical omissions in the data provided to us by course instructors rendered such measurements impossible.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
Students within each course section were block-randomly assigned to one of three dialogue conditions in which the usage of [[Knowledge Construction Dialogues]] (KCDs) differed.  The short-KCD condition approximated the treatment condition from last year&#039;s study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], in which mostly conceptual dialogues followed problem solving on selected [[Andes]] problems.&lt;br /&gt;
&lt;br /&gt;
[[Image:6kcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
The new long-KCD condition appended more quantitative practice and [[transfer]] content (via additional &#039;&#039;what if&#039;&#039; scenarios) to the short KCDs; students in this condition were assigned five fewer Andes problems than those in the short-KCD condition to attempt to equate time on task.&lt;br /&gt;
&lt;br /&gt;
[[Image:7lkcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
The control condition was identical to last year&#039;s; students saw no KCDs and were assigned five more Andes problems than the short-KCD students.&lt;br /&gt;
&lt;br /&gt;
[[Image:E1b-50.jpg]]&lt;br /&gt;
&lt;br /&gt;
The following student variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
* Number of viable target problems completed (before the post-test, or before the final exam)&lt;br /&gt;
* Number of viable (non-gibberish) dialogues completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* Pre-test score&lt;br /&gt;
&lt;br /&gt;
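For readers who want to reproduce this kind of analysis, the following is a minimal sketch (in Python, using pandas and statsmodels) of how such a regression model might be set up.  The file name and column names used here (scores.csv, post_test, pre_test, cqpr, kcds_completed, problems_completed) are placeholders for illustration only, not the study&#039;s actual variable names.&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
import pandas as pd&lt;br /&gt;
import statsmodels.formula.api as smf&lt;br /&gt;
&lt;br /&gt;
# Hypothetical input: one row per student, with the predictor columns listed above.&lt;br /&gt;
df = pd.read_csv(&#039;scores.csv&#039;)&lt;br /&gt;
&lt;br /&gt;
# Full model: post-test score regressed on pre-test score, CQPR, viable KCDs&lt;br /&gt;
# completed, and viable target problems completed.&lt;br /&gt;
full_model = smf.ols(&#039;post_test ~ pre_test + cqpr + kcds_completed + problems_completed&#039;, data=df).fit()&lt;br /&gt;
print(full_model.summary())&lt;br /&gt;
&lt;br /&gt;
# Reduced model with CQPR omitted, as in the analyses reported under Findings.&lt;br /&gt;
reduced_model = smf.ols(&#039;post_test ~ pre_test + kcds_completed + problems_completed&#039;, data=df).fit()&lt;br /&gt;
print(reduced_model.summary())&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;
&lt;br /&gt;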
=== Hypothesis ===&lt;br /&gt;
The main hypotheses tested were (a) that explicit, part-task training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, would be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems (replicating last year&#039;s study); and (b) that extended dialogues with additional quantitative practice in applying these [[knowledge components]] in different problem-solving contexts would better foster [[transfer]] and [[robust learning]].&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
Preliminary analyses showed marginal support for a replication of last year&#039;s finding that student performance on the post-test relative to the pre-test was significantly influenced by the number of dialogues they completed, as opposed to the number of target problems they completed.  However, due to software glitches during data collection, low student participation (in completing assigned homework and/or dialogues) in some course sections, and evidence of cheating and general &amp;quot;gaming the system&amp;quot; by some students, more detailed analyses were possible only after identifying and omitting noisy data from our overall corpus.  This lengthy and painstaking process required us to examine KCD and Andes logs at a much finer level of detail than in prior studies, in an attempt to determine whether each student&#039;s list of assigned problems and dialogues represented viable data (i.e., whether the problems and KCDs were &#039;&#039;legitimately&#039;&#039; completed).&lt;br /&gt;
&lt;br /&gt;
In the end, the various problems plaguing our data made any comparison between our two treatment conditions impossible.  However, we were able to salvage enough data to compare student performance relative to degrees of viable problem and dialogue completion.  The salvaged exposure measures were the number of viable KCDs completed (vs. KCDs that were &amp;quot;passed through&amp;quot; with gibberish responses) and the number of viable problems completed (vs. those on which students likely &amp;quot;cheated&amp;quot;, as indicated by answer-only &amp;quot;solutions&amp;quot; or by minimal Andes inputs made to attain a score of 50, the minimum criterion for full credit used by one course instructor).&lt;br /&gt;
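&lt;br /&gt;
The viability screening itself was done by hand, through detailed inspection of the KCD and Andes logs.  Purely as an illustration of the kind of rule-of-thumb filters that could be automated, the sketch below flags likely non-viable records; the function names and thresholds (min_words, min_entries) are invented for this example and were not the criteria actually used.&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
def kcd_is_viable(responses, min_words=3):&lt;br /&gt;
    # Treat a dialogue as viable if at least half of the student&#039;s turns&lt;br /&gt;
    # contain a few real words rather than gibberish &amp;quot;pass-through&amp;quot; answers.&lt;br /&gt;
    substantive = [r for r in responses if len(r.split()) &amp;gt;= min_words]&lt;br /&gt;
    return len(substantive) &amp;gt;= len(responses) / 2&lt;br /&gt;
&lt;br /&gt;
def problem_is_viable(num_log_entries, andes_score, min_entries=10):&lt;br /&gt;
    # Flag likely answer-only &amp;quot;solutions&amp;quot;: a full-credit score of 50&lt;br /&gt;
    # reached with very few Andes log entries.&lt;br /&gt;
    return not (andes_score &amp;gt;= 50 and num_log_entries &amp;lt; min_entries)&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;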
&lt;br /&gt;
==== Pre- and Post-Tests ====&lt;br /&gt;
&lt;br /&gt;
Regressing post-test score on pre-test score, CQPR, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .52, &#039;&#039;F&#039;&#039;(4, 54) = 14.82, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors, but only pre-test score and CQPR were statistically significant (&#039;&#039;p&#039;&#039;s &amp;lt; .0005 &amp;amp; .05, respectively); KCD completion was marginal (&#039;&#039;p&#039;&#039; = .08) and problem completion was &#039;&#039;ns&#039;&#039; (&#039;&#039;t&#039;&#039; &amp;lt; 1).  After omitting CQPR from the regression model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .50, &#039;&#039;F&#039;&#039;(3, 57) = 18.74, &#039;&#039;p&#039;&#039; &amp;lt; .00001), the effect of KCD completion reached statistical significance (&#039;&#039;p&#039;&#039; = .012) while that of problem completion remained &#039;&#039;ns&#039;&#039; (&#039;&#039;t&#039;&#039; &amp;lt; 1).&lt;br /&gt;
&lt;br /&gt;
Regressing post-test qualitative subscore on pre-test qualitative subscore, CQPR, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .37, &#039;&#039;F&#039;&#039;(4, 54) = 7.96, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors, but only pre-test subscore was statistically significant (&#039;&#039;p&#039;&#039; &amp;lt; .005); KCD completion reached marginal status (&#039;&#039;p&#039;&#039; = .07) only after dropping CQPR from the regression model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .35, &#039;&#039;F&#039;&#039;(3, 57) = 10.33, &#039;&#039;p&#039;&#039; &amp;lt; .00001).  The effect of problem completion was &#039;&#039;ns&#039;&#039; (&#039;&#039;t&#039;&#039; &amp;lt; 1) in both models.&lt;br /&gt;
&lt;br /&gt;
Regressing post-test quantitative subscore on pre-test quantitative subscore, CQPR, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .46, &#039;&#039;F&#039;&#039;(4, 54) = 11.53, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors, but only pre-test subscore and CQPR were statistically significant (&#039;&#039;p&#039;&#039;s &amp;lt; .0005 &amp;amp; .01, respectively); KCD completion reached statistical significance (&#039;&#039;p&#039;&#039; = .015) only after dropping CQPR from the regression model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .41, &#039;&#039;F&#039;&#039;(3, 57) = 13.31, &#039;&#039;p&#039;&#039; &amp;lt; .00001).  The effect of problem completion was &#039;&#039;ns&#039;&#039; in both models.&lt;br /&gt;
&lt;br /&gt;
A regression of normalized (Estes) gain scores on CQPR, number of KCDs completed, and number of target problems completed was only marginally significant overall (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .12, &#039;&#039;F&#039;&#039;(3, 55) = 2.59, &#039;&#039;p&#039;&#039; = .062) and showed positive contributions of all factors, but the only factor that even approached significance was KCD completion (&#039;&#039;p&#039;&#039; = .09).  However, omitting CQPR from the model resulted in a significant regression (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .13, &#039;&#039;F&#039;&#039;(2, 58) = 4.24, &#039;&#039;p&#039;&#039; &amp;lt; .05) with a significant effect of KCD completion (&#039;&#039;p&#039;&#039; = .016).  The effect of problem completion was &#039;&#039;ns&#039;&#039; (&#039;&#039;t&#039;&#039; &amp;lt; 1) in both models.&lt;br /&gt;
&lt;br /&gt;
==== Final Exam &amp;amp; Course Grades ====&lt;br /&gt;
&lt;br /&gt;
A regression of exam score on CQPR, KCD completion counts, and problem completion counts was significant (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .47, &#039;&#039;F&#039;&#039;(3, 68) = 20.17, &#039;&#039;p&#039;&#039; &amp;lt; .00001) and showed positive contributions of all three factors, but only CQPR had a statistically significant effect (&#039;&#039;p&#039;&#039; &amp;lt; .00001).  However, when we omitted CQPR from the model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .16, &#039;&#039;F&#039;&#039;(2, 72) = 7.08, &#039;&#039;p&#039;&#039; &amp;lt; .005), both KCD and problem completion counts reached statistical significance (&#039;&#039;p&#039;&#039;s &amp;lt; .05).  A regression of final grade on all three factors was significant (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .56, &#039;&#039;F&#039;&#039;(3, 53) = 22.10, &#039;&#039;p&#039;&#039; &amp;lt; .00001), but again only CQPR had a statistically significant effect (&#039;&#039;p&#039;&#039; &amp;lt; .00001).  When we omitted CQPR from this model (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .13, &#039;&#039;F&#039;&#039;(2, 55) = 3.99, &#039;&#039;p&#039;&#039; &amp;lt; .05), neither factor of interest reached significance, but KCD completion came closer (&#039;&#039;p&#039;&#039; = .073) than did problem completion (&#039;&#039;p&#039;&#039; = .106).&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to Advisory Board, January 2008&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Katz, S., Connelly, J., &amp;amp; Wilson, C. (2007).  Out of the lab and into the classroom: An evaluation of reflective dialogue in Andes.  In R. Luckin, K. R. Koedinger, &amp;amp; J. Greer (Eds.), &#039;&#039;Artificial Intelligence in Education: Building Technology Rich Learning Contexts that Work&#039;&#039; (pp. 425-432). Amsterdam: IOS Press.&lt;br /&gt;
*Leonard, W. J., Dufresne, R. J., &amp;amp; Mestre, J. P. (1996). Using qualitative problem-solving strategies to highlight the role of conceptual knowledge in solving problems. &#039;&#039;American Journal of Physics, 64&#039;&#039; (12), 1495-1503.&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Lessons learned.  &#039;&#039;International Journal of Artificial Intelligence in Education&#039;&#039;, &#039;&#039;15&#039;&#039;(3).&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Five years of evaluation.  In G. McCalla, C.K. Looi, B. Bredeweg, &amp;amp; J. Breuker (Eds.), &#039;&#039;Artificial Intelligence in Education&#039;&#039; (pp. 678-685).  Amsterdam: IOS Press.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Post-practice reflection (Katz)|Post-practice reflection (Katz &amp;amp; Connelly, 2005)]]&lt;br /&gt;
* [[Reflective Dialogues (Katz)|Reflective Dialogues (Katz, Connelly &amp;amp; Treacy, 2006)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; VanLehn)]]&lt;br /&gt;
&lt;br /&gt;
====  Future plans ====&lt;br /&gt;
To wrap up the project, we plan to:&lt;br /&gt;
* write a conference paper reporting updated findings on the salvaged data&lt;br /&gt;
* write a conference paper detailing the process by which we salvaged usable data&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=8724</id>
		<title>Extending Reflective Dialogue Support (Katz &amp; Connelly)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=8724"/>
		<updated>2008-12-22T22:25:01Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Future plans */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Extending Automated Dialogue Support for Robust Learning of Physics ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz &amp;amp; John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 10/1/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 9/30/08&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 75&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 125 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability, while other students show at least a glimmer of understanding of basic physics concepts and principles but are unable to use this knowledge to solve quantitative problems.  The present research seeks to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge via [[post-practice reflection]] dialogues that guide students in learning and practicing the concepts and principles associated with a just-solved physics problem.  It builds upon our 2006 LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]] by trying to better support [[robust learning]] via a third condition in which students work through longer dialogues designed to foster [[transfer]] by specifically tying together qualitative and quantitative knowledge in different contexts.&lt;br /&gt;
&lt;br /&gt;
Software errors and students&#039; circumvention of both the system and instructors’ scoring rubrics rendered our intended detailed analyses impossible.  However, despite the various manifestations of “cheating” and otherwise gaming the system that were discovered, we found that dialogue exposure significantly boosted gains on a post-test and on a largely quantitative final exam two months after the end of our intervention, providing evidence of some robust learning and transfer to related problem solving contexts.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
* [[Transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge (i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them), combined with quantitative practice in applying these [[knowledge components]] in different contexts, enhance students’ problem-solving ability more than additional problem solving and better foster [[transfer]] and [[robust learning]]?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that instructors face a two-sided problem.  Some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability.  Other students display the reverse problem: they show at least a glimmer of understanding of basic physics concepts and principles, but are unable to use this knowledge to solve quantitative problems.  Still other students master neither qualitative nor quantitative understanding of physics; very few master both.  Thus, the instructional challenge motivating this project is to find effective pedagogical strategies to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge.  Our scientific goal is to determine whether explicit and implicit learning can be effectively combined via post-practice dialogues that guide students in reflecting on the concepts and principles associated with a just-solved physics problem.  The main hypothesis tested is that, in the context of tutored problem solving, &#039;&#039;integrative reflective dialogues&#039;&#039; that explicitly tie qualitative knowledge to quantitative knowledge can improve quantitative problem-solving ability and retention of qualitative knowledge better than problem-solving practice (implicit learning) alone.&lt;br /&gt;
&lt;br /&gt;
To test this hypothesis, we conducted an experiment in the PSLC Physics LearnLab at the US Naval Academy in sections that use the [[Andes]] physics tutoring system (VanLehn et al., 2005a, 2005b).  We compared students who were randomly assigned to one of three conditions on measures of qualitative and quantitative problem-solving performance.  The two treatment conditions engaged in automated reflective dialogues after solving quantitative physics problems, while the control condition solved the same set of problems (plus a few additional problems to balance time on task) without any reflective dialogues, using the standard version of Andes.  In one treatment condition, the reflective dialogues individually targeted the three main types of knowledge that experts employ during problem solving, according to Leonard, Dufresne, &amp;amp; Mestre (1996): knowledge about &#039;&#039;what&#039;&#039; principle(s) to apply to a given problem, &#039;&#039;how&#039;&#039; to apply these principles (e.g., what equations to use), and &#039;&#039;why&#039;&#039; to apply them, that is, what the applicability conditions are.  In a prior LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], this intervention significantly improved students’ qualitative understanding of basic mechanics, as measured by pre-test to post-test gain scores.  However, students did not outperform standard Andes users on more [[robust learning]] measures of [[transfer]] (e.g., performance on quantitative course exams) or on a measure of retention of qualitative problem-solving ability (Katz, Connelly, &amp;amp; Wilson, 2007).  The extended dialogue condition we implemented differs from the other dialogue condition in three main ways:  (1) reflective dialogues contained more problem variations (&#039;&#039;what if&#039;&#039; scenarios), designed to support both qualitative and quantitative knowledge (most of our previous &#039;&#039;what if&#039;&#039; scenarios were qualitative only); (2) these “what if” scenarios were tied to the corresponding Andes problem-solving context &#039;&#039;&#039;and&#039;&#039;&#039; to new contexts, to help support near and far [[transfer]]; and (3) students were prompted to state the rules ([[knowledge components]]) applied to solving the problem variations, in order to promote a principle-based approach to learning, and they were given feedback that made these rules explicit.  Our goal is to determine whether reflective dialogues that make the links between qualitative and quantitative physics knowledge explicit are more effective than both our previous dialogues and an implicit learning condition that is based on problem-solving practice alone.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
We had also intended to measure &#039;&#039;short-term retention&#039;&#039; via student performance on course exams that covered target topics (statics; translational dynamics, including circular motion; work-energy; power; and linear momentum, including impulse).  However, critical omissions in the data provided to us by course instructors rendered such measurements impossible.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
Students within each course section were block-randomly assigned to one of three dialogue conditions in which the usage of [[Knowledge Construction Dialogues]] (KCDs) differed.  The short-KCD condition approximated the treatment condition from last year&#039;s study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], in which mostly conceptual dialogues followed problem solving on selected [[Andes]] problems.&lt;br /&gt;
&lt;br /&gt;
[[Image:6kcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
The new long-KCD condition appended more quantitative practice and [[transfer]] content (via additional &#039;&#039;what if&#039;&#039; scenarios) to the short KCDs; students in this condition were assigned five fewer Andes problems than those in the short-KCD condition to attempt to equate time on task.&lt;br /&gt;
&lt;br /&gt;
[[Image:7lkcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
The control condition was identical to last year&#039;s; students saw no KCDs and were assigned five more Andes problems than the short-KCD students.&lt;br /&gt;
&lt;br /&gt;
[[Image:E1b-50.jpg]]&lt;br /&gt;
&lt;br /&gt;
The following student variables were entered into a regression analysis, with post-test score as the dependent variable (a sketch of such a model fit appears after this list):&lt;br /&gt;
&lt;br /&gt;
* Number of viable target problems completed (before the post-test, or before the final exam)&lt;br /&gt;
* Number of viable (non-gibberish) dialogues completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* Pre-test score&lt;br /&gt;
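&lt;br /&gt;
To make the shape of this analysis concrete, here is a minimal sketch of such a regression in Python (pandas and statsmodels).  The file name and column names are hypothetical placeholders for the variables listed above; this is an illustrative sketch, not the study&#039;s actual analysis code.&lt;br /&gt;
&lt;br /&gt;
 import pandas as pd&lt;br /&gt;
 import statsmodels.formula.api as smf&lt;br /&gt;
 # Hypothetical per-student table: one row per participant.&lt;br /&gt;
 df = pd.read_csv(&#039;student_measures.csv&#039;)&lt;br /&gt;
 # Post-test score regressed on the exposure and covariate measures listed above.&lt;br /&gt;
 model = smf.ols(&#039;post_test ~ viable_problems + viable_dialogues + cqpr + pre_test&#039;, data=df).fit()&lt;br /&gt;
 print(model.summary())&lt;br /&gt;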
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
The main hypotheses tested were (a) that explicit, part-task training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, would be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through extensive problem-solving practice (replicating last year&#039;s study); and (b) that extended dialogues with additional quantitative practice in applying these [[knowledge components]] in different problem-solving contexts would better foster [[transfer]] and [[robust learning]].&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
Preliminary analyses showed marginal support for a replication of last year&#039;s finding that student performance on the post-test relative to the pre-test was significantly influenced by the number of dialogues they completed, as opposed to the number of target problems they completed.  However, due to software glitches during data collection, low student participation (completing assigned homework and/or dialogues) in some course sections, and evidence of cheating and general &amp;quot;gaming the system&amp;quot; by some students, more detailed analyses were made possible only by the identification and omission of noisy data from our overall corpus.  This lengthy and painstaking process required us to revisit the KCD and Andes logs at a much finer level of detail, in an attempt to determine whether each student&#039;s assigned problems and dialogues were viable (i.e., legitimately completed).&lt;br /&gt;
&lt;br /&gt;
In the end, the various problems plaguing our data rendered impossible any comparisons between our two treatment conditions.  However, we were able to salvage enough data to compare student performance relative to degrees of viable problem and dialogue completion.  The salvaged exposure measures were the number of viable KCDs completed (vs. KCDs that were &amp;quot;passed through&amp;quot; with gibberish responses) and the number of viable problems completed (vs. those on which students likely &amp;quot;cheated&amp;quot;, as determined by answer-only &amp;quot;solutions&amp;quot; or minimal Andes inputs to attain a score of 50, the minimum score required for credit by one course instructor).&lt;br /&gt;
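&lt;br /&gt;
To illustrate the kind of screening this involved, here is a toy sketch (in Python) of rules for flagging non-viable work.  The field names, thresholds, and gibberish test are hypothetical stand-ins for the coding criteria described above; this is not the project&#039;s code.&lt;br /&gt;
&lt;br /&gt;
 MIN_CREDIT_SCORE = 50  # minimum score one instructor accepted for credit&lt;br /&gt;
 def problem_is_viable(num_andes_entries, final_score, answer_only):&lt;br /&gt;
     # Flag likely cheating: an answer-only solution, or so few Andes entries&lt;br /&gt;
     # that a score at or above the credit threshold cannot reflect genuine work.&lt;br /&gt;
     if answer_only:&lt;br /&gt;
         return False&lt;br /&gt;
     if final_score &gt;= MIN_CREDIT_SCORE and num_andes_entries &lt; 10:&lt;br /&gt;
         return False&lt;br /&gt;
     return True&lt;br /&gt;
 def dialogue_is_viable(responses):&lt;br /&gt;
     # Treat a KCD as passed through if most responses contain no real words.&lt;br /&gt;
     gibberish = sum(1 for r in responses if not any(c.isalpha() for c in r))&lt;br /&gt;
     return gibberish &lt; len(responses) / 2&lt;br /&gt;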
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to Advisory Board, January 2008&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Katz, S., Connelly, J., &amp;amp; Wilson, C. (2007).  Out of the lab and into the classroom: An evaluation of reflective dialogue in Andes.  In R. Luckin, K. R. Koedinger, &amp;amp; J. Greer (Eds.), &#039;&#039;Artificial Intelligence in Education: Building Technology Rich Learning Contexts that Work&#039;&#039; (pp. 425-432). Amsterdam: IOS Press.&lt;br /&gt;
*Leonard, W. J., Dufresne, R. J., &amp;amp; Mestre, J. P. (1996). Using qualitative problem-solving strategies to highlight the role of conceptual knowledge in solving problems. &#039;&#039;American Journal of Physics, 64&#039;&#039; (12), 1495-1503.&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005a).  The Andes physics tutoring system: Lessons learned.  &#039;&#039;International Journal of Artificial Intelligence in Education&#039;&#039;, &#039;&#039;15&#039;&#039;(3).&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005b).  The Andes physics tutoring system: Five years of evaluation.  In G. McCalla, C.K. Looi, B. Bredeweg, &amp;amp; J. Breuker (Eds.), &#039;&#039;Artificial Intelligence in Education&#039;&#039; (pp. 678-685).  Amsterdam, Netherlands: IOS Press.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Post-practice reflection (Katz)|Post-practice reflection (Katz &amp;amp; Connelly, 2005)]]&lt;br /&gt;
* [[Reflective Dialogues (Katz)|Reflective Dialogues (Katz, Connelly &amp;amp; Treacy, 2006)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
Self-explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; VanLehn)]]&lt;br /&gt;
&lt;br /&gt;
====  Future plans ====&lt;br /&gt;
Our future plans to wrap up the project:&lt;br /&gt;
* write conference paper reporting updated findings on salvaged data&lt;br /&gt;
* write conference paper detailing the process by which we salvaged usable data&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=8720</id>
		<title>Extending Reflective Dialogue Support (Katz &amp; Connelly)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=8720"/>
		<updated>2008-12-22T22:06:36Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Dependent Variables */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Extending Automated Dialogue Support for Robust Learning of Physics ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz &amp;amp; John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 10/1/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 9/30/08&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 75&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 125 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability, while other students show at least a glimmer of understanding of basic physics concepts and principles but are unable to use this knowledge to solve quantitative problems.  The present research seeks to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge via [[post-practice reflection]] dialogues that guide students in learning and practicing the concepts and principles associated with a just-solved physics problem.  It builds upon our 2006 LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]] by trying to better support [[robust learning]] via a third condition in which students work through longer dialogues designed to foster [[transfer]] by specifically tying together qualitative and quantitative knowledge in different contexts.&lt;br /&gt;
&lt;br /&gt;
Software errors and students&#039; circumvention of both the system and instructors’ scoring rubrics rendered our intended detailed analyses impossible.  However, despite the various manifestations of “cheating” and otherwise gaming the system that were discovered, we found that dialogue exposure significantly boosted gains on a post-test and on a largely quantitative final exam two months after the end of our intervention, providing evidence of some robust learning and transfer to related problem solving contexts.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
* [[Transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge (i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them), combined with quantitative practice in applying these [[knowledge components]] in different contexts, enhance students’ problem-solving ability more than additional problem solving and better foster [[transfer]] and [[robust learning]]?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that instructors deal with a double-edged sword.  Some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability.  Other students display the reverse problem: they show at least a glimmer of understanding of basic physics concepts and principles, but are unable to use this knowledge to solve quantitative problems.   Still other students master neither qualitative nor quantitative understanding of physics; very few master both.  Thus, the instructional challenge motivating this project is to find effective pedagogical strategies to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge.   Our scientific goal is to determine whether explicit and implicit learning can be effectively combined via post-practice dialogues that guide students in reflecting on the concepts and principles associated with a just-solved physics problem.  The main hypothesis tested is that, in the context of tutored problem solving, &#039;&#039;integrative reflective dialogues&#039;&#039; that explicitly tie qualitative knowledge to quantitative knowledge can improve quantitative problem-solving ability and retention of qualitative knowledge better than problem-solving practice (implicit learning) alone.  &lt;br /&gt;
&lt;br /&gt;
To test this hypothesis, we conducted an experiment in the PSLC Physics LearnLab at the US Naval Academy in sections that use the [[Andes]] physics tutoring system (VanLehn et al., 2005a, 2005b).  We compared students who were randomly assigned to one of three conditions on measures of qualitative and quantitative problem-solving performance.  The two treatment conditions engaged in automated reflective dialogues after solving quantitative physics problems, while the control condition solved the same set of problems (plus a few additional problems to balance time on task) without any reflective dialogues, using the standard version of Andes.  In one treatment condition, the reflective dialogues individually targeted the three main types of knowledge that experts employ during problem solving, according to Leonard, Dufresne, &amp;amp; Mestre (1996): knowledge about &#039;&#039;what&#039;&#039; principle(s) to apply to a given problem, &#039;&#039;how&#039;&#039; to apply these principles (e.g., what equations to use), and &#039;&#039;why&#039;&#039; to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).   In a prior LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], this intervention significantly improved students’ qualitative understanding of basic mechanics, as measured by pre-test to post-test gain scores.  However, students did not outperform standard Andes users on more [[robust learning]] measures of [[transfer]] (e.g., performance on quantitative course exams) and on a measure of retention of qualitative problem-solving ability (Katz, Connelly, &amp;amp; Wilson, 2007).  The extended dialogue condition we implemented differs from the other dialogue condition in three main ways:  (1) reflective dialogues contained more problem variations (&#039;&#039;what if&#039;&#039; scenarios), designed to support both qualitative and quantitative knowledge (most of our previous &#039;&#039;what if&#039;&#039; scenarios were qualitative only); (2) these “what if” scenarios were tied to the corresponding Andes problem-solving context &#039;&#039;&#039;and&#039;&#039;&#039; to new contexts, to help support near and far [[transfer]]; and (3) students were prompted to state the rules ([[knowledge components]]) applied to solving the problem variations, in order to promote a principle-based approach to learning, and they were given feedback that makes these rules explicit.  Our goal is to determine whether reflective dialogues that make the links between qualitative and quantitative physics knowledge explicit are more effective than both our previous dialogues and an implicit learning condition that is based on problem-solving practice alone.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;Short-term retention&#039;&#039;. Performance on course exams that cover target topics (e.g., work and energy, translational dynamics, rotation, momentum)&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
We had also intended to measure &#039;&#039;Short-term retention&#039;&#039; via student performance on course exams that covered target topics.  However, critical omissions in the data provided to us by course instructors rendered such measurements impossible.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
Students within each course section were block-randomly assigned to one of three dialogue conditions in which the usage of [[Knowledge Construction Dialogues]] (KCDs) differed.  The short-KCD condition approximated the treatment condition from last year&#039;s study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], in which mostly conceptual dialogues followed problem solving on selected [[Andes]] problems.&lt;br /&gt;
&lt;br /&gt;
[[Image:6kcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
The new long-KCD condition appended more quantitative practice and [[transfer]] content (via additional &#039;&#039;what if&#039;&#039; scenarios) to the short KCDs; students in this condition were assigned five fewer Andes problems than those in the short-KCD condition to attempt to equate time on task.&lt;br /&gt;
&lt;br /&gt;
[[Image:7lkcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
The control condition was identical to last year&#039;s; students saw no KCDs and were assigned five more Andes problems than the short-KCD students.&lt;br /&gt;
&lt;br /&gt;
[[Image:E1b-50.jpg]]&lt;br /&gt;
&lt;br /&gt;
The following variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
* Number of problems completed before the post-test was administered&lt;br /&gt;
* Number of dialogues that the student completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* College major category&lt;br /&gt;
* Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypotheses ===&lt;br /&gt;
The main hypotheses tested were (a) that explicit, part-task training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, would be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems (replicating last year&#039;s study); and (b) that extended dialogues with additional quantitative practice in applying these [[knowledge components]] in different problem-solving contexts would better foster [[transfer]] and [[robust learning]].&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
Preliminary analyses show at least marginal support for a replication of last year&#039;s finding that student performance on the post-test relative to the pre-test was significantly influenced by the number of dialogues they completed, as opposed to the number of target problems they completed.  However, due to software glitches during data collection, low student participation (completing assigned homework and/or dialogues) in some course sections, and evidence of general &amp;quot;gaming the system&amp;quot; by some students, more detailed analyses are pending the identification and omission of noisy data from our overall corpus.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to Advisory Board, January 2008&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Katz, S., Connelly, J., &amp;amp; Wilson, C. (2007).  Out of the lab and into the classroom: An evaluation of reflective dialogue in Andes.  In R. Luckin, K. R. Koedinger, &amp;amp; J. Greer (Eds.), &#039;&#039;Artificial Intelligence in Education: Building Technology Rich Learning Contexts that Work&#039;&#039; (pp. 425-432). Amsterdam: IOS Press.&lt;br /&gt;
*Leonard, W. J., Dufresne, R. J., &amp;amp; Mestre, J. P. (1996). Using qualitative problem-solving strategies to highlight the role of conceptual knowledge in solving problems. &#039;&#039;American Journal of Physics, 64&#039;&#039; (12), 1495-1503.&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Lessons learned.  &#039;&#039;International Journal of Artificial Intelligence in Education&#039;&#039;, &#039;&#039;15&#039;&#039;(3), 147-204.&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Five years of evaluation.  In G. McCalla, C.K. Looi, B. Bredeweg, &amp;amp; J. Breuker (Eds.), &#039;&#039;Artificial Intelligence in Education&#039;&#039; (pp. 678-685).  Amsterdam, Netherlands: IOS Press.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Post-practice reflection (Katz)|Post-practice reflection (Katz &amp;amp; Connelly, 2005)]]&lt;br /&gt;
* [[Reflective Dialogues (Katz)|Reflective Dialogues (Katz, Connelly &amp;amp; Treacy, 2006)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; VanLehn)]]&lt;br /&gt;
&lt;br /&gt;
====  Future plans ====&lt;br /&gt;
Our future plans through September 2008:&lt;br /&gt;
* continue to try extracting viable data&lt;br /&gt;
* write journal article reporting updated findings&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=8719</id>
		<title>Extending Reflective Dialogue Support (Katz &amp; Connelly)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=8719"/>
		<updated>2008-12-22T22:02:18Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Dependent Variables */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Extending Automated Dialogue Support for Robust Learning of Physics ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz &amp;amp; John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 10/1/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 9/30/08&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 75&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 125 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability, while other students show at least a glimmer of understanding of basic physics concepts and principles but are unable to use this knowledge to solve quantitative problems.  The present research seeks to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge via [[post-practice reflection]] dialogues that guide students in learning and practicing the concepts and principles associated with a just-solved physics problem.  It builds upon our 2006 LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]] by trying to better support [[robust learning]] via a third condition in which students work through longer dialogues designed to foster [[transfer]] by specifically tying together qualitative and quantitative knowledge in different contexts.&lt;br /&gt;
&lt;br /&gt;
Software errors and students&#039; circumvention of both the system and instructors’ scoring rubrics rendered our intended detailed analyses impossible.  However, despite the various manifestations of “cheating” and otherwise gaming the system that were discovered, we found that dialogue exposure significantly boosted gains on a post-test and on a largely quantitative final exam two months after the end of our intervention, providing evidence of some robust learning and transfer to related problem solving contexts.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
* [[Transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge (i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them), combined with quantitative practice in applying these [[knowledge components]] in different contexts, enhance students’ problem-solving ability more than additional problem solving and better foster [[transfer]] and [[robust learning]]?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that instructors deal with a double-edged sword.  Some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability.  Other students display the reverse problem: they show at least a glimmer of understanding of basic physics concepts and principles, but are unable to use this knowledge to solve quantitative problems.   Still other students master neither qualitative nor quantitative understanding of physics; very few master both.  Thus, the instructional challenge motivating this project is to find effective pedagogical strategies to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge.   Our scientific goal is to determine whether explicit and implicit learning can be effectively combined via post-practice dialogues that guide students in reflecting on the concepts and principles associated with a just-solved physics problem.  The main hypothesis tested is that, in the context of tutored problem solving, &#039;&#039;integrative reflective dialogues&#039;&#039; that explicitly tie qualitative knowledge to quantitative knowledge can improve quantitative problem-solving ability and retention of qualitative knowledge better than problem-solving practice (implicit learning) alone.  &lt;br /&gt;
&lt;br /&gt;
To test this hypothesis, we conducted an experiment in the PSLC Physics LearnLab at the US Naval Academy in sections that use the [[Andes]] physics tutoring system (VanLehn et al., 2005a, 2005b).  We compared students who were randomly assigned to one of three conditions on measures of qualitative and quantitative problem-solving performance.  The two treatment conditions engaged in automated reflective dialogues after solving quantitative physics problems, while the control condition solved the same set of problems (plus a few additional problems to balance time on task) without any reflective dialogues, using the standard version of Andes.  In one treatment condition, the reflective dialogues individually targeted the three main types of knowledge that experts employ during problem solving, according to Leonard, Dufresne, &amp;amp; Mestre (1996): knowledge about &#039;&#039;what&#039;&#039; principle(s) to apply to a given problem, &#039;&#039;how&#039;&#039; to apply these principles (e.g., what equations to use), and &#039;&#039;why&#039;&#039; to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).   In a prior LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], this intervention significantly improved students’ qualitative understanding of basic mechanics, as measured by pre-test to post-test gain scores.  However, students did not outperform standard Andes users on more [[robust learning]] measures of [[transfer]] (e.g., performance on quantitative course exams) and on a measure of retention of qualitative problem-solving ability (Katz, Connelly, &amp;amp; Wilson, 2007).  The extended dialogue condition we implemented differs from the other dialogue condition in three main ways:  (1) reflective dialogues contained more problem variations (&#039;&#039;what if&#039;&#039; scenarios), designed to support both qualitative and quantitative knowledge (most of our previous &#039;&#039;what if&#039;&#039; scenarios were qualitative only); (2) these “what if” scenarios were tied to the corresponding Andes problem-solving context &#039;&#039;&#039;and&#039;&#039;&#039; to new contexts, to help support near and far [[transfer]]; and (3) students were prompted to state the rules ([[knowledge components]]) applied to solving the problem variations, in order to promote a principle-based approach to learning, and they were given feedback that makes these rules explicit.  Our goal is to determine whether reflective dialogues that make the links between qualitative and quantitative physics knowledge explicit are more effective than both our previous dialogues and an implicit learning condition that is based on problem-solving practice alone.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;Short-term retention&#039;&#039;. Performance on course exams that cover target topics (e.g., work and energy, translational dynamics, rotation, momentum)&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
Students within each course section were block-randomly assigned to one of three dialogue conditions in which the usage of [[Knowledge Construction Dialogues]] (KCDs) differed.  The short-KCD condition approximated the treatment condition from last year&#039;s study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], in which mostly conceptual dialogues followed problem solving on selected [[Andes]] problems.&lt;br /&gt;
&lt;br /&gt;
[[Image:6kcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
The new long-KCD condition appended more quantitative practice and [[transfer]] content (via additional &#039;&#039;what if&#039;&#039; scenarios) to the short KCDs; students in this condition were assigned five fewer Andes problems than those in the short-KCD condition to attempt to equate time on task.&lt;br /&gt;
&lt;br /&gt;
[[Image:7lkcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
The control condition was identical to last year&#039;s; students saw no KCDs and were assigned five more Andes problems than the short-KCD students.&lt;br /&gt;
&lt;br /&gt;
[[Image:E1b-50.jpg]]&lt;br /&gt;
&lt;br /&gt;
The following variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
* Number of problems completed before the post-test was administered&lt;br /&gt;
* Number of dialogues that the student completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* College major category&lt;br /&gt;
* Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypotheses ===&lt;br /&gt;
The main hypotheses tested were (a) that explicit, part-task training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, would be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems (replicating last year&#039;s study); and (b) that extended dialogues with additional quantitative practice in applying these [[knowledge components]] in different problem-solving contexts would better foster [[transfer]] and [[robust learning]].&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
Preliminary analyses show at least marginal support for a replication of last year&#039;s finding that student performance on the post-test relative to the pre-test was significantly influenced by the number of dialogues they completed, as opposed to the number of target problems they completed.  However, due to software glitches during data collection, low student participation (completing assigned homework and/or dialogues) in some course sections, and evidence of general &amp;quot;gaming the system&amp;quot; by some students, more detailed analyses are pending the identification and omission of noisy data from our overall corpus.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to Advisory Board, January 2008&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Katz, S., Connelly, J., &amp;amp; Wilson, C. (2007).  Out of the lab and into the classroom: An evaluation of reflective dialogue in Andes.  In R. Luckin, K. R. Koedinger, &amp;amp; J. Greer (Eds.), &#039;&#039;Artificial Intelligence in Education: Building Technology Rich Learning Contexts that Work&#039;&#039; (pp. 425-432). Amsterdam: IOS Press.&lt;br /&gt;
*Leonard, W. J., Dufresne, R. J., &amp;amp; Mestre, J. P. (1996). Using qualitative problem-solving strategies to highlight the role of conceptual knowledge in solving problems. &#039;&#039;American Journal of Physics, 64&#039;&#039; (12), 1495-1503.&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Lessons learned.  &#039;&#039;International Journal of Artificial Intelligence in Education&#039;&#039;, &#039;&#039;15&#039;&#039;(3), 147-204.&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Five years of evaluation.  In G. McCalla, C.K. Looi, B. Bredeweg, &amp;amp; J. Breuker (Eds.), &#039;&#039;Artificial Intelligence in Education&#039;&#039; (pp. 678-685).  Amsterdam, Netherlands: IOS Press.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Post-practice reflection (Katz)|Post-practice reflection (Katz &amp;amp; Connelly, 2005)]]&lt;br /&gt;
* [[Reflective Dialogues (Katz)|Reflective Dialogues (Katz, Connelly &amp;amp; Treacy, 2006)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; VanLehn)]]&lt;br /&gt;
&lt;br /&gt;
====  Future plans ====&lt;br /&gt;
Our future plans through September 2008:&lt;br /&gt;
* continue to try extracting viable data&lt;br /&gt;
* write journal article reporting updated findings&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=8718</id>
		<title>Extending Reflective Dialogue Support (Katz &amp; Connelly)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=8718"/>
		<updated>2008-12-22T22:00:44Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Abstract */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Extending Automated Dialogue Support for Robust Learning of Physics ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz &amp;amp; John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 10/1/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 9/30/08&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 75&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 125 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability, while other students show at least a glimmer of understanding of basic physics concepts and principles but are unable to use this knowledge to solve quantitative problems.  The present research seeks to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge via [[post-practice reflection]] dialogues that guide students in learning and practicing the concepts and principles associated with a just-solved physics problem.  It builds upon our 2006 LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]] by trying to better support [[robust learning]] via a third condition in which students work through longer dialogues designed to foster [[transfer]] by specifically tying together qualitative and quantitative knowledge in different contexts.&lt;br /&gt;
&lt;br /&gt;
Software errors and students&#039; circumvention of both the system and instructors’ scoring rubrics rendered our intended detailed analyses impossible.  However, despite the various manifestations of “cheating” and otherwise gaming the system that were discovered, we found that dialogue exposure significantly boosted gains on a post-test and on a largely quantitative final exam two months after the end of our intervention, providing evidence of some robust learning and transfer to related problem solving contexts.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
* [[Transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge (i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them), combined with quantitative practice in applying these [[knowledge components]] in different contexts, enhance students’ problem-solving ability more than additional problem solving and better foster [[transfer]] and [[robust learning]]?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that instructors deal with a double-edged sword.  Some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability.  Other students display the reverse problem: they show at least a glimmer of understanding of basic physics concepts and principles, but are unable to use this knowledge to solve quantitative problems.   Still other students master neither qualitative nor quantitative understanding of physics; very few master both.  Thus, the instructional challenge motivating this project is to find effective pedagogical strategies to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge.   Our scientific goal is to determine whether explicit and implicit learning can be effectively combined via post-practice dialogues that guide students in reflecting on the concepts and principles associated with a just-solved physics problem.  The main hypothesis tested is that, in the context of tutored problem solving, &#039;&#039;integrative reflective dialogues&#039;&#039; that explicitly tie qualitative knowledge to quantitative knowledge can improve quantitative problem-solving ability and retention of qualitative knowledge better than problem-solving practice (implicit learning) alone.  &lt;br /&gt;
&lt;br /&gt;
To test this hypothesis, we conducted an experiment in the PSLC Physics LearnLab at the US Naval Academy in sections that use the [[Andes]] physics tutoring system (VanLehn et al., 2005a, 2005b).  We compared students who were randomly assigned to one of three conditions on measures of qualitative and quantitative problem-solving performance.  The two treatment conditions engaged in automated reflective dialogues after solving quantitative physics problems, while the control condition solved the same set of problems (plus a few additional problems to balance time on task) without any reflective dialogues, using the standard version of Andes.  In one treatment condition, the reflective dialogues individually targeted the three main types of knowledge that experts employ during problem solving, according to Leonard, Dufresne, &amp;amp; Mestre (1996): knowledge about &#039;&#039;what&#039;&#039; principle(s) to apply to a given problem, &#039;&#039;how&#039;&#039; to apply these principles (e.g., what equations to use), and &#039;&#039;why&#039;&#039; to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).   In a prior LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], this intervention significantly improved students’ qualitative understanding of basic mechanics, as measured by pre-test to post-test gain scores.  However, students did not outperform standard Andes users on more [[robust learning]] measures of [[transfer]] (e.g., performance on quantitative course exams) and on a measure of retention of qualitative problem-solving ability (Katz, Connelly, &amp;amp; Wilson, 2007).  The extended dialogue condition we implemented differs from the other dialogue condition in three main ways:  (1) reflective dialogues contained more problem variations (&#039;&#039;what if&#039;&#039; scenarios), designed to support both qualitative and quantitative knowledge (most of our previous &#039;&#039;what if&#039;&#039; scenarios were qualitative only); (2) these “what if” scenarios were tied to the corresponding Andes problem-solving context &#039;&#039;&#039;and&#039;&#039;&#039; to new contexts, to help support near and far [[transfer]]; and (3) students were prompted to state the rules ([[knowledge components]]) applied to solving the problem variations, in order to promote a principle-based approach to learning, and they were given feedback that makes these rules explicit.  Our goal is to determine whether reflective dialogues that make the links between qualitative and quantitative physics knowledge explicit are more effective than both our previous dialogues and an implicit learning condition that is based on problem-solving practice alone.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;Short-term retention&#039;&#039;. Performance on course exams that cover target topics (e.g., work and energy, translational dynamics, rotation, momentum)&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
Students within each course section were block-randomly assigned to one of three dialogue conditions in which the usage of [[Knowledge Construction Dialogues]] (KCDs) differed.  The short-KCD condition approximated the treatment condition from last year&#039;s study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], in which mostly conceptual dialogues followed problem solving on selected [[Andes]] problems.&lt;br /&gt;
&lt;br /&gt;
[[Image:6kcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
The new long-KCD condition appended more quantitative practice and [[transfer]] content (via additional &#039;&#039;what if&#039;&#039; scenarios) to the short KCDs; students in this condition were assigned five fewer Andes problems than those in the short-KCD condition to attempt to equate time on task.&lt;br /&gt;
&lt;br /&gt;
[[Image:7lkcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
The control condition was identical to last year&#039;s; students saw no KCDs and were assigned five more Andes problems than the short-KCD students.&lt;br /&gt;
&lt;br /&gt;
[[Image:E1b-50.jpg]]&lt;br /&gt;
&lt;br /&gt;
The following variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
* Number of problems completed before the post-test was administered&lt;br /&gt;
* Number of dialogues that the student completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* College major category&lt;br /&gt;
* Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypotheses ===&lt;br /&gt;
The main hypotheses tested were (a) that explicit, part-task training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, would be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems (replicating last year&#039;s study); and (b) that extended dialogues with additional quantitative practice in applying these [[knowledge components]] in different problem-solving contexts would better foster [[transfer]] and [[robust learning]].&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
Preliminary analyses show at least marginal support for a replication of last year&#039;s finding that student performance on the post-test relative to the pre-test was significantly influenced by the number of dialogues they completed, as opposed to the number of target problems they completed.  However, due to software glitches during data collection, low student participation (completing assigned homework and/or dialogues) in some course sections, and evidence of general &amp;quot;gaming the system&amp;quot; by some students, more detailed analyses are pending the identification and omission of noisy data from our overall corpus.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to Advisory Board, January 2008&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Katz, S., Connelly, J., &amp;amp; Wilson, C. (2007).  Out of the lab and into the classroom: An evaluation of reflective dialogue in Andes.  In R. Luckin, K. R. Koedinger, &amp;amp; J. Greer (Eds.), &#039;&#039;Artificial Intelligence in Education: Building Technology Rich Learning Contexts that Work&#039;&#039; (pp. 425-432). Amsterdam: IOS Press.&lt;br /&gt;
*Leonard, W. J., Dufresne, R. J., &amp;amp; Mestre, J. P. (1996). Using qualitative problem-solving strategies to highlight the role of conceptual knowledge in solving problems. &#039;&#039;American Journal of Physics, 64&#039;&#039; (12), 1495-1503.&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Lessons learned.  &#039;&#039;International Journal of Artificial Intelligence in Education&#039;&#039;, &#039;&#039;15&#039;&#039;(3), 147-204.&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Five years of evaluation.  In G. McCalla, C.K. Looi, B. Bredeweg, &amp;amp; J. Breuker (Eds.), &#039;&#039;Artificial Intelligence in Education&#039;&#039; (pp. 678-685).  Amsterdam, Netherlands: IOS Press.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Post-practice reflection (Katz)|Post-practice reflection (Katz &amp;amp; Connelly, 2005)]]&lt;br /&gt;
* [[Reflective Dialogues (Katz)|Reflective Dialogues (Katz, Connelly &amp;amp; Treacy, 2006)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; VanLehn)]]&lt;br /&gt;
&lt;br /&gt;
====  Future plans ====&lt;br /&gt;
Our future plans through September 2008:&lt;br /&gt;
* continue to try extracting viable data&lt;br /&gt;
* write journal article reporting updated findings&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=8667</id>
		<title>Extending Reflective Dialogue Support (Katz &amp; Connelly)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=8667"/>
		<updated>2008-12-03T22:28:41Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Independent Variables */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Extending Automated Dialogue Support for Robust Learning of Physics ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz &amp;amp; John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 10/1/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 9/30/08&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 75&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 125 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability, while other students show at least a glimmer of understanding of basic physics concepts and principles but are unable to use this knowledge to solve quantitative problems.  The present research seeks to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge via [[post-practice reflection]] dialogues that guide students in learning and practicing the concepts and principles associated with a just-solved physics problem.  It builds upon our 2006 LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]] by trying to better support [[robust learning]] via a third condition in which students work through longer dialogues designed to foster [[transfer]] by specifically tying together qualitative and quantitative knowledge in different contexts.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
* [[Transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge (i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them), combined with quantitative practice in applying these [[knowledge components]] in different contexts, enhance students’ problem-solving ability more than additional problem solving and better foster [[transfer]] and [[robust learning]]?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that instructors deal with a double-edged sword.  Some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability.  Other students display the reverse problem: they show at least a glimmer of understanding of basic physics concepts and principles, but are unable to use this knowledge to solve quantitative problems.   Still other students master neither qualitative nor quantitative understanding of physics; very few master both.  Thus, the instructional challenge motivating this project is to find effective pedagogical strategies to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge.   Our scientific goal is to determine whether explicit and implicit learning can be effectively combined via post-practice dialogues that guide students in reflecting on the concepts and principles associated with a just-solved physics problem.  The main hypothesis tested is that, in the context of tutored problem solving, &#039;&#039;integrative reflective dialogues&#039;&#039; that explicitly tie qualitative knowledge to quantitative knowledge can improve quantitative problem-solving ability and retention of qualitative knowledge better than problem-solving practice (implicit learning) alone.  &lt;br /&gt;
&lt;br /&gt;
To test this hypothesis, we conducted an experiment in the PSLC Physics LearnLab at the US Naval Academy in sections that use the [[Andes]] physics tutoring system (VanLehn et al., 2005a, 2005b).  We compared students who were randomly assigned to one of three conditions on measures of qualitative and quantitative problem-solving performance.  The two treatment conditions engaged in automated reflective dialogues after solving quantitative physics problems, while the control condition solved the same set of problems (plus a few additional problems to balance time on task) without any reflective dialogues, using the standard version of Andes.  In one treatment condition, the reflective dialogues individually targeted the three main types of knowledge that experts employ during problem solving, according to Leonard, Dufresne, &amp;amp; Mestre (1996): knowledge about &#039;&#039;what&#039;&#039; principle(s) to apply to a given problem, &#039;&#039;how&#039;&#039; to apply these principles (e.g., what equations to use), and &#039;&#039;why&#039;&#039; to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).   In a prior LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], this intervention significantly improved students’ qualitative understanding of basic mechanics, as measured by pre-test to post-test gain scores.  However, students did not outperform standard Andes users on more [[robust learning]] measures of [[transfer]] (e.g., performance on quantitative course exams) and on a measure of retention of qualitative problem-solving ability (Katz, Connelly, &amp;amp; Wilson, 2007).  The extended dialogue condition we implemented differs from the other dialogue condition in three main ways:  (1) reflective dialogues contained more problem variations (&#039;&#039;what if&#039;&#039; scenarios), designed to support both qualitative and quantitative knowledge (most of our previous &#039;&#039;what if&#039;&#039; scenarios were qualitative only); (2) these “what if” scenarios were tied to the corresponding Andes problem-solving context &#039;&#039;&#039;and&#039;&#039;&#039; to new contexts, to help support near and far [[transfer]]; and (3) students were prompted to state the rules ([[knowledge components]]) applied to solving the problem variations, in order to promote a principle-based approach to learning, and they were given feedback that makes these rules explicit.  Our goal is to determine whether reflective dialogues that make the links between qualitative and quantitative physics knowledge explicit are more effective than both our previous dialogues and an implicit learning condition that is based on problem-solving practice alone.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;Short-term retention&#039;&#039;. Performance on course exams that cover target topics (e.g., work and energy, translational dynamics, rotation, momentum)&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
Students within each course section were block-randomly assigned to one of three dialogue conditions in which the usage of [[Knowledge Construction Dialogues]] (KCDs) differed.  The short-KCD condition approximated the treatment condition from last year&#039;s study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], in which mostly conceptual dialogues followed problem solving on selected [[Andes]] problems.&lt;br /&gt;
&lt;br /&gt;
[[Image:6kcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
The new long-KCD condition appended more quantitative practice and [[transfer]] content (via additional &#039;&#039;what if&#039;&#039; scenarios) to the short KCDs; students in this condition were assigned five fewer Andes problems than those in the short-KCD condition to attempt to equate time on task.&lt;br /&gt;
&lt;br /&gt;
[[Image:7lkcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
The control condition was identical to last year&#039;s; students saw no KCDs and were assigned five more Andes problems than the short-KCD students.&lt;br /&gt;
&lt;br /&gt;
[[Image:E1b-50.jpg]]&lt;br /&gt;
&lt;br /&gt;
The following variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
* Number of problems completed before the post-test was administered&lt;br /&gt;
* Number of dialogues that the student completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* College major category&lt;br /&gt;
* Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypotheses ===&lt;br /&gt;
The main hypotheses tested were (a) that explicit, part-task training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, would be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems (replicating last year&#039;s study); and (b) that extended dialogues with additional quantitative practice in applying these [[knowledge components]] in different problem-solving contexts would better foster [[transfer]] and [[robust learning]].&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
Preliminary analyses show at least marginal support for a replication of last year&#039;s finding that student performance on the post-test relative to the pre-test was significantly influenced by the number of dialogues they completed, as opposed to the number of target problems they completed.  However, due to software glitches during data collection, low student participation (completing assigned homework and/or dialogues) in some course sections, and evidence of general &amp;quot;gaming the system&amp;quot; by some students, more detailed analyses are pending the identification and omission of noisy data from our overall corpus.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to Advisory Board, January 2008&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Katz, S., Connelly, J., &amp;amp; Wilson, C. (2007).  Out of the lab and into the classroom: An evaluation of reflective dialogue in Andes.  In R. Luckin, K. R. Koedinger, &amp;amp; J. Greer (Eds.), &#039;&#039;Artificial Intelligence in Education: Building Technology Rich Learning Contexts that Work&#039;&#039; (pp. 425-432). Amsterdam: IOS Press.&lt;br /&gt;
*Leonard, W. J., Dufresne, R. J., &amp;amp; Mestre, J. P. (1996). Using qualitative problem-solving strategies to highlight the role of conceptual knowledge in solving problems. &#039;&#039;American Journal of Physics, 64&#039;&#039; (12), 1495-1503.&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Lessons learned.  &#039;&#039;International Journal of Artificial Intelligence in Education&#039;&#039;, &#039;&#039;15&#039;&#039;(3), 147-204.&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Five years of evaluation.  In G. McCalla, C.K. Looi, B. Bredeweg, &amp;amp; J. Breuker (Eds.), &#039;&#039;Artificial Intelligence in Education&#039;&#039; (pp. 678-685).  Amsterdam, Netherlands: IOS Press.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Post-practice reflection (Katz)|Post-practice reflection (Katz &amp;amp; Connelly, 2005)]]&lt;br /&gt;
* [[Reflective Dialogues (Katz)|Reflective Dialogues (Katz, Connelly &amp;amp; Treacy, 2006)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; VanLehn)]]&lt;br /&gt;
&lt;br /&gt;
====  Future plans ====&lt;br /&gt;
Our future plans through September 2008:&lt;br /&gt;
* continue trying to extract viable data&lt;br /&gt;
* write journal article reporting updated findings&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=File:7lkcd.jpg&amp;diff=8666</id>
		<title>File:7lkcd.jpg</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=File:7lkcd.jpg&amp;diff=8666"/>
		<updated>2008-12-03T22:28:06Z</updated>

		<summary type="html">&lt;p&gt;Connelly: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Reflective_Dialogues_(Katz)&amp;diff=8525</id>
		<title>Reflective Dialogues (Katz)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Reflective_Dialogues_(Katz)&amp;diff=8525"/>
		<updated>2008-11-03T20:18:13Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Independent Variables */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Do Reflective Dialogues that Explicitly Target the “What? How? and Why (not)?” Knowledge of Physics Problem Solving Promote Expert-like Planning Ability? ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz, John Connelly, Donald Treacy&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 3/1/06&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 6/30/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 67&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 750 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
One of the main differences between experts and novices in physics is that experts are more adept at identifying relevant principles and generating solution plans before starting to solve a problem (Chi, Glaser, &amp;amp; Rees, 1982; Dufresne, Gerace, Hardiman, &amp;amp; Mestre, 1992; Priest &amp;amp; Lindsay, 1992).  We propose that this difference may arise for two reasons: (1) in traditional physics courses, students are neither explicitly asked to plan nor given scaffolding to support planning during problem solving, and (2) many students lack the basic knowledge of physics concepts, principles, and procedures that is prerequisite for effective planning.&lt;br /&gt;
&lt;br /&gt;
In this project, we tested the effectiveness of engaging students in reflective dialogues after they solve problems in [[Andes]] that explicitly target the three main types of knowledge that experts employ during planning: knowledge about what principle(s) to apply to a given problem, how to apply these principles (e.g., what equations to use), and why (or why not) to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).  The main hypothesis tested is that explicit training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, will be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems.&lt;br /&gt;
   &lt;br /&gt;
To test this hypothesis, we conducted a Physics LearnLab study in which we compared the effectiveness of an experimental version of Andes that engaged students in reflective “what? how? and why?” dialogues after physics problem solving ([[explicit instruction]] condition) with a control version that gave students additional problem-solving practice after they solved an Andes problem—e.g., by having them identify bugs in a sample student solution to a problem ([[implicit instruction]] condition).  We predicted that students in the explicit condition would outperform students in the implicit condition with respect to gain score from pre-test to post-test and scores on course exams that targeted far transfer.  Consistent with prior research, the dialogues promoted conceptual understanding of physics. However, measures of retention and transfer to problem-solving ability showed only a marginal effect of completing the reflective dialogues.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge—i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them—enhance students’ problem-solving ability?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Because experts develop schemata largely through a good deal of practice with solving various types of problems, traditional introductory college physics courses take a similar approach.  They are structured so that students are introduced to concepts, principles, and procedures in their text and through lectures, which they are then asked to apply to many problem-solving exercises.  The hope is that, with extensive practice, students will integrate [[procedural]] and [[conceptual knowledge]] and develop expert-like schemata and planning skills.  Unfortunately, many students don’t.  In addition to exiting these courses with lingering naïve misconceptions (e.g., Halloun &amp;amp; Hestenes, 1985; McDermott, 1984), they continue to solve problems largely by manipulating equations until the desired quantity is isolated—that is, by means-end analysis—instead of by identifying relevant principles and generating solution plans before starting to solve a problem, as experts do (Dufresne et al., 1992; Larkin, McDermott, Simon, &amp;amp; Simon, 1980; Priest &amp;amp; Lindsay, 1992).   &lt;br /&gt;
&lt;br /&gt;
We refer to the traditional approach to physics instruction described in the preceding paragraph as [[implicit instruction]], because it encourages the inductive development of abstractions (concepts, principles, and schemata) through repeated exposure to instances, instead of by explicitly reifying these abstractions (O&#039;Malley &amp;amp; Chamot, 1994).  In response to the limitations of implicit approaches to physics instruction that neither ask students to plan nor scaffold them in doing it, several instructional scientists have proposed methods that explicitly engage students in planning exercises (Dufresne et al., 1992; Leonard, Dufresne, &amp;amp; Mestre, 1996; Mestre, Dufresne, Gerace, &amp;amp; Hardiman, 1993).  These methods have met with modest success when tested mainly using high-achieving students (B or above in an introductory college physics course).  We suggest that the main reason that many students, even high achievers, are unable to use explicit planning methods effectively is that they lack the basic concepts, principles, and procedures that are prerequisites for effective planning.  &lt;br /&gt;
&lt;br /&gt;
This project tested the effectiveness of engaging students in reflective dialogues after they solve problems in Andes (e.g., Gertner &amp;amp; VanLehn, 2000) which explicitly targeted the three main types of knowledge that experts employ during planning: knowledge about what principle(s) to apply to a given problem, how to apply these principles (e.g., what equations to use), and why to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).  Explicit instruction on these knowledge components proved to be effective and more efficient than giving students repeated practice in solving different types of problems.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;Short-term retention&#039;&#039;. Performance on course exams that cover target topics (e.g., work and energy, translational dynamics, rotation, momentum)&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Students within each course section were block-randomly assigned to one of two dialogue conditions: standard [[Knowledge Construction Dialogues]] (KCDs) or control with no KCDs.  Control students worked on standard Andes problems only:&lt;br /&gt;
[[Image:E1b-50.jpg]]&lt;br /&gt;
&lt;br /&gt;
KCD students worked through KCDs following selected Andes problems:&lt;br /&gt;
[[Image:6kcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
Students in the control condition were assigned an additional five Andes problems in an attempt to better equate time on task.&lt;br /&gt;
&lt;br /&gt;
The following variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
* Number of problems completed before the post-test was administered&lt;br /&gt;
* Number of dialogues that the student completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* College major&lt;br /&gt;
* Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
The main hypothesis tested was that explicit, part-task training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, would be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems.&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
Analyses of pre- and post-test scores were more encouraging than for last year&#039;s study ([[Post-practice reflection (Katz)]]).  After omitting scores from one student with a post-test duration of less than two minutes, we were left with treatment and control groups of equal size (&#039;&#039;n&#039;&#039; = 33). We also re-classified two treatment subjects who did no dialogues as control subjects (effective treatment and control &#039;&#039;n&#039;&#039;s = 31 and 35, respectively), although these subjects were not able to access the five extra control-group Andes problems. As we had hoped, ANOVA showed no significant differences between the effective treatment and control groups on pre-test score (respective &#039;&#039;M&#039;&#039;s = 12.10 and 11.77; &#039;&#039;F&#039;&#039; &amp;lt; 1), but treatment subjects had higher mean post-test scores (17.97 vs. 15.57; &#039;&#039;F&#039;&#039;(1, 64) = 4.89, &#039;&#039;p&#039;&#039; = .031), mean raw gain scores (5.87 vs. 3.80; &#039;&#039;F&#039;&#039; = 5.62, &#039;&#039;p&#039;&#039; = .021), and mean Estes gain scores (0.330 vs. 0.208; &#039;&#039;F&#039;&#039; = 6.74, &#039;&#039;p&#039;&#039; = .012) than did control subjects. In short, without regard to problem-solving practice, subjects who did KCDs did significantly better on the post-test than those who did not.&lt;br /&gt;
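&lt;br /&gt;
The group comparison just reported can be illustrated with a short analysis script.  The following is a rough sketch in Python (pandas/SciPy) under the assumption of a hypothetical per-student file with columns group, pretest, and posttest; it computes raw gains and one-way ANOVAs only, and leaves out the Estes gain measure.&lt;br /&gt;
&lt;br /&gt;
 import pandas as pd&lt;br /&gt;
 from scipy import stats&lt;br /&gt;
 # Hypothetical per-student table: group (&#039;kcd&#039; or &#039;ctrl&#039;), pretest, posttest.&lt;br /&gt;
 df = pd.read_csv(&#039;pre_post_scores.csv&#039;)&lt;br /&gt;
 df[&#039;raw_gain&#039;] = df[&#039;posttest&#039;] - df[&#039;pretest&#039;]&lt;br /&gt;
 kcd = df[df[&#039;group&#039;] == &#039;kcd&#039;]&lt;br /&gt;
 ctrl = df[df[&#039;group&#039;] == &#039;ctrl&#039;]&lt;br /&gt;
 # One-way ANOVA on pre-test, post-test, and raw gain, mirroring the&lt;br /&gt;
 # comparisons above (with two groups this is equivalent to a t-test).&lt;br /&gt;
 for col in (&#039;pretest&#039;, &#039;posttest&#039;, &#039;raw_gain&#039;):&lt;br /&gt;
     f, p = stats.f_oneway(kcd[col], ctrl[col])&lt;br /&gt;
     print(col, round(f, 2), round(p, 3))&lt;br /&gt;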
&lt;br /&gt;
Student participation in both homework problem solving and dialogues was much improved over last year, with 21 of 34 control subjects (62%) and 23 of 33 treatment subjects (70%) finishing at least 80% of the assigned target problems (of 31 for control, 26 for treatment) prior to the post-test and with 25 treatment subjects (76%) finishing at least 80% of the 26 associated KCDs. However, participation was still far from perfect; 7 treatment subjects (21%) completed half or fewer of the assigned KCDs on time, and 2 of them completed no target problems or KCDs on time.  We again treated KCD completion and target problem completion as continuous independent variables in regression analyses. Regressing post-test score on pre-test score, QPA, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .52, &#039;&#039;F&#039;&#039;(4, 61) = 16.70, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors, but only pre-test score, QPA, and KCD completion were statistically significant (&#039;&#039;p&#039;&#039;s &amp;lt; .001, .05, &amp;amp; .05, respectively); problem completion was &#039;&#039;ns&#039;&#039; (&#039;&#039;p&#039;&#039; = .54). Therefore, across all subjects it was KCD completion, as opposed to target homework problem completion, that significantly predicted post-test performance.&lt;br /&gt;
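&lt;br /&gt;
The regression just described can be reproduced with any statistics package; below is a minimal Python (statsmodels) sketch, again assuming a hypothetical table whose file and column names (posttest, pretest, qpa, kcds_done, problems_done) are placeholders.&lt;br /&gt;
&lt;br /&gt;
 import pandas as pd&lt;br /&gt;
 import statsmodels.formula.api as smf&lt;br /&gt;
 # Hypothetical per-student table; column names are placeholders.&lt;br /&gt;
 df = pd.read_csv(&#039;andes_2006_scores.csv&#039;)&lt;br /&gt;
 # Post-test regressed on pre-test, QPA, KCDs completed, and target problems completed.&lt;br /&gt;
 model = smf.ols(&#039;posttest ~ pretest + qpa + kcds_done + problems_done&#039;, data=df).fit()&lt;br /&gt;
 print(round(model.rsquared, 2))                 # overall R-squared&lt;br /&gt;
 print(round(model.fvalue, 2), model.f_pvalue)   # omnibus F test&lt;br /&gt;
 print(model.pvalues.round(3))                   # per-predictor significance&lt;br /&gt;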
&lt;br /&gt;
To measure retention and transfer, we also analyzed scores from an hourly exam administered by one instructor to his two sections (&#039;&#039;n&#039;&#039; = 47). All problems on this exam were quantitative and covered a subset of the units targeted during the intervention.  Scores on this exam ranged from 176 to 382 (of 400), and were highly correlated with pre- and post-test scores; &#039;&#039;r&#039;&#039;s(45) = .54 and .71, &#039;&#039;p&#039;&#039;s &amp;lt; .0001 and .00001. However, ANOVAs showed no differences between groups (&#039;&#039;F&#039;&#039;s &amp;lt; 1), and regression of subscores on QPA, KCDs completed, and target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .54, &#039;&#039;F&#039;&#039;(3, 43) = 16.58, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors but a significant effect of only QPA (&#039;&#039;p&#039;&#039; &amp;lt; .00001). Therefore, student performance on the hourly exam was not significantly affected by either KCD completion or target problem completion.&lt;br /&gt;
&lt;br /&gt;
In summary, student performance on the post-test relative to the pre-test was significantly influenced by the number of dialogues they completed, as opposed to the number of target problems they completed prior to the post-test. However, neither measure had a significant effect on student performance on an exam that focused on problem-solving ability.&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
This study is part of the Interactive Communication cluster, and its hypothesis is a specialization of the IC cluster’s central hypothesis, which is that robust learning occurs when two conditions are met:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;The learning event space should have paths that are mostly learning-by-doing along with alternative paths where a second agent does most of the work&#039;&#039;.  Since this experiment did not deal with collaboration between agents, it did not test this condition.  It did, however, show that the more that students engage in one form of learning-by-doing—namely, post-practice reflection—the more they learn.&lt;br /&gt;
* &#039;&#039;The student takes the learning-by-doing path unless it becomes too difficult&#039;&#039;.  More students took the learning-by-doing path than in our prior study, perhaps because this year instructors &#039;&#039;required&#039;&#039; them to do so (i.e., this time there were negative consequences for avoiding the dialogues).&lt;br /&gt;
&lt;br /&gt;
Analyses of retention and transfer to problem-solving ability did not reveal a significant effect of the reflective dialogues. We had hoped that the capstone dialogues that treatment students completed before selected problems, at the end of each unit, would support problem-solving ability. It is possible that the intervention consisted of too few capstone dialogues (only five) to observe any effect.&lt;br /&gt;
&lt;br /&gt;
What seems to be the critical feature of post-practice reflection (PPR) in supporting conceptual understanding is the explicit instruction that it provides in domain knowledge. PPR may help students to fill in knowledge gaps, resolve misconceptions, and abstract from the case at hand so that they are better prepared to engage in constructive activity (e.g., self-explanation) in future problems. Preliminary research comparing Andes (which encourages implicit learning of problem-solving strategies) with Pyrenees, a system that teaches problem-solving strategies explicitly, also suggests an advantage for explicit instruction (VanLehn et al., 2004). Further research is needed to identify the mechanisms that drive learning from reflective dialogue, and to increase its potential to enhance problem-solving ability in addition to conceptual knowledge.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to site visitors, 2005&lt;br /&gt;
* Full paper accepted at AIED 2007:&lt;br /&gt;
**Katz, S., Connelly, J., &amp;amp; Wilson, C. (2007). Out of the lab and into the classroom: An evaluation of reflective dialogue in Andes. In R. Luckin, K. R. Koedinger, &amp;amp; J. Greer (Eds.), &#039;&#039;Artificial Intelligence in Education: Building Technology Rich Learning Contexts that Work&#039;&#039; (pp. 425-432). Amsterdam: IOS Press.&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Chi, M. T. H., Glaser, R., &amp;amp; Rees, E. (1982). Expertise in problem solving. In R. J. Sternberg (Ed.), &#039;&#039;Advances in the Psychology of Human Intelligence, Vol. 1&#039;&#039; (pp. 7-75). Hillsdale, NJ: Erlbaum.&lt;br /&gt;
*Dufresne, R. J., Gerace, W. J., Hardiman, P. T., &amp;amp; Mestre, J. P. (1992). Constraining novices to perform expertlike analyses: Effects on schema acquisition. &#039;&#039;The Journal of the Learning Sciences, 2&#039;&#039; (3), 307-331.&lt;br /&gt;
*Gertner, A. S., &amp;amp; VanLehn, K. (2000). Andes: A coached problem solving environment for physics. In G. Gauthier, C. Frasson, &amp;amp; K. VanLehn (Eds.), &#039;&#039;ITS 2000: Proceedings of the 5th International Conference on Intelligent Tutoring Systems&#039;&#039; (pp. 133-142). Berlin: Springer-Verlag.&lt;br /&gt;
*Larkin, J., McDermott, J., Simon, D. P., &amp;amp; Simon, H. A. (1980). Expert and novice performance in solving physics problems. &#039;&#039;Science, 208&#039;&#039;, 1335-1342.&lt;br /&gt;
*Leonard, W. J., Dufresne, R. J., &amp;amp; Mestre, J. P. (1996). Using qualitative problem-solving strategies to highlight the role of conceptual knowledge in solving problems. &#039;&#039;American Journal of Physics, 64&#039;&#039; (12), 1495-1503.&lt;br /&gt;
*Mestre, J. P., Dufresne, R. J., Gerace, W. J., &amp;amp; Hardiman, P. T. (1993). Promoting skilled problem-solving behavior among beginning physics students. &#039;&#039;Journal of Research in Science Teaching, 30&#039;&#039;, 303-317.&lt;br /&gt;
*O’Malley, M., &amp;amp; Chamot, A. (1994). &#039;&#039;The CALLA Handbook&#039;&#039;. Reading, MA: Addison-Wesley.&lt;br /&gt;
*Priest, A. G., &amp;amp; Lindsay, R. O. (1992). New light on novice-expert differences in physics problem solving. &#039;&#039;British Journal of Psychology, 83&#039;&#039;, 389-405.&lt;br /&gt;
*VanLehn, K., Bhembe, D., Chi, M., Lynch, C., Schulze, K., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2004). Implicit versus explicit learning of strategies in a nonprocedural cognitive skill. In J. C. Lester, R. M. Vicari, &amp;amp; F. Paraguacu, (Eds.), &#039;&#039;Intelligent Tutoring Systems: 7th International Conference&#039;&#039; (pp. 521-530). Berlin: Springer-Verlag.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Post-practice reflection (Katz)|Post-practice reflection (Katz &amp;amp; Connelly, 2005)]]&lt;br /&gt;
* [[Extending Reflective Dialogue Support (Katz &amp;amp; Connelly)|Extending Reflective Dialogue Support (Katz &amp;amp; Connelly, 2007)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; VanLehn)]]&lt;br /&gt;
&lt;br /&gt;
====  Future plans ====&lt;br /&gt;
Our future plans for January 2008 - December 2008:&lt;br /&gt;
* write journal article expanding AIED conference paper&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=File:6kcd.jpg&amp;diff=8524</id>
		<title>File:6kcd.jpg</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=File:6kcd.jpg&amp;diff=8524"/>
		<updated>2008-11-03T20:17:05Z</updated>

		<summary type="html">&lt;p&gt;Connelly: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Post-practice_reflection_(Katz)&amp;diff=8523</id>
		<title>Post-practice reflection (Katz)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Post-practice_reflection_(Katz)&amp;diff=8523"/>
		<updated>2008-11-03T20:16:42Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Independent Variables */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Post-practice reflection in a first-year physics course: Does mixed-initiative interaction support robust learning better than tutor-led interaction or canned text? ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PI&#039;&#039;&#039; || Sandra Katz&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Post-Docs:&#039;&#039;&#039; &lt;br /&gt;
| &amp;lt;br&amp;gt;John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 1/1/05&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 12/31/05&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 123&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 500 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || No; Andes data still incompatible&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
We conducted an [[in vivo experiment]] within the PSLC Physics LearnLab to address two questions about reflection on recent problem-solving activity within [[Andes]]: (1) Does [[post-practice reflection]] support robust learning of physics—that is, students’ ability to [[transfer]] what they learned during instruction to novel situations? and (2) What is the preferred delivery mode for post-practice reflection in intelligent tutoring systems?  We compared pre-test to post-test learning gains of students who were randomly assigned to one of four conditions that represent points on a scale of increasingly generative student activity: (1) a control group who solved a set of problems in Andes without any reflective activity, (2) a self-explanation group, who solved Andes problems, responded to several “[[reflection questions]]” (Lee &amp;amp; Hutchison, 1998) after each problem and received a canned, expert-generated explanation that they were prompted to compare with their response, (3) a “tutor-led interaction” group, who solved the same Andes problems and reflection questions (RQs) as the didactic group, but the questions were embedded in [[Knowledge Construction Dialogues]]  (KCD’s) that guided students in generating a correct response, and (4) a “scaffolded mixed initiative KCD group,” which solved the same problems and responded to the same reflection questions as the other reflection groups, embedded in similar KCD’s as those presented to the “tutor-led interaction” group, with the added support of follow-up questioning menus that allowed students to take initiative. &lt;br /&gt;
	&lt;br /&gt;
Unfortunately, participation in the three experimental conditions was too low to compare the effectiveness of these treatments.  However, a yoked pairs analysis revealed that the more reflection questions that students did, of any type, the better they did, with respect to both post-test scores and pre-test to post-test gain scores.  Similarly, regression analysis revealed that the number of dialogues a student completed had a significant positive effect on post-test score, independent of the number of problems he or she completed.  These findings provide empirical support for post-practice reflection in an ITS that is a central component of a course.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
&lt;br /&gt;
=== Research questions ===&lt;br /&gt;
&lt;br /&gt;
Do reflection questions after physics problem solving support robust learning?  Is more interactive reflection, between the automated tutor and student, more effective in supporting robust learning than less interactive reflection?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
Despite the well-established role that post-practice reflective dialogue plays in apprentice-style learning (Collins, Brown, &amp;amp; Newman, 1989), few studies have been conducted to test its effectiveness.  Hence, there is little empirical support for incorporating post-practice reflection in courses led by a human teacher or within intelligent tutoring systems (ITSs).  Nor is there much guidance on how to implement reflection effectively, especially within the constraints of natural-language understanding and generation capabilities.&lt;br /&gt;
&lt;br /&gt;
Research by Lee and Hutchison (1998) focused on the effectiveness of reflection questions posed after students studied worked examples on balancing chemistry equations in a computer-based learning environment.  This study found that reflection questions enhanced students’ problem-solving ability.  However, the role of reflection in promoting conceptual understanding was not addressed; nor did the study investigate whether reflection questions after problem-solving exercises (as opposed to example studying) support learning.  &lt;br /&gt;
&lt;br /&gt;
Two laboratory studies conducted by Katz and her colleagues addressed these questions.   In a longitudinal analysis of student performance on avionics tasks in Sherlock 2 (e.g., Katz et al., 1998), Katz, O’Donnell and Kay (2000) found that discussions that took place between an avionics expert and two students were more effective in resolving misconceptions when they were distributed across problem solving and debrief than were discussions that took place during problem solving alone.  However, due to constraints inherent in the research setting, there was no control group in this study—that is, avionics trainees who did not experience debrief led by a domain expert—and no instrument to measure performance gains.   A follow-up study by Katz, Allbritton, &amp;amp; Connelly (2003) addressed these limitations in a different domain, first-year college physics.  Forty-six students taking college physics solved problems in Andes in one of three conditions: with no reflection questions after problem solving (control group), with reflection questions discussed with human tutors, or with the same reflection questions followed by canned feedback (without a human tutor).  A comparison of pre-test and post-test scores was conducted to measure learning gains. The main result was that students learned more with reflection questions than without, but the two conditions with reflection questions (canned feedback and human tutoring) did not differ significantly.&lt;br /&gt;
&lt;br /&gt;
The current study is significant because it validated the results of these laboratory studies in an actual classroom setting.  Specifically, it showed that post-practice reflection supports learning, as measured by pre-test to post-test learning gain scores.  Due to low participation, however, this experiment did not shed light on the question of which modality of reflection is most effective.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near transfer ([[normal post-test]]) and far [[transfer]] (robust learning) items.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Short-term retention&#039;&#039;. Performance on course exam that covered target topic (work and energy), 1-2 weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Students were block-randomly assigned to one of four dialogue conditions, with reflective KCDs presented after selected Andes problems: &lt;br /&gt;
* control (Ctrl), with no reflection questions or KCDs; just the standard Andes problems:&lt;br /&gt;
[[Image:E1b-50.jpg]]&lt;br /&gt;
&lt;br /&gt;
* canned text response (CTR), with free-response KCDs followed by pre-scripted expert responses; students&#039; responses were not given any positive or negative feedback:&lt;br /&gt;
[[Image:5ctr.jpg]]&lt;br /&gt;
&lt;br /&gt;
* standard KCDs (KCD), with shorter-answer questions and tutor responses that indicated what parts of the student&#039;s response were incomplete or incorrect and tried to elicit the correct aspects:&lt;br /&gt;
[[Image:5kcd.jpg]]&lt;br /&gt;
&lt;br /&gt;
* limited mixed-initiative KCDs (MIX), same as the standard KCDs but with on-demand hypertext question-and-answer links after selected tutor questions:&lt;br /&gt;
[[Image:5mix.jpg]]&lt;br /&gt;
&lt;br /&gt;
However, there were no significant differences between categorical condition assignments, even after reclassifying as Ctrl subjects those in the three treatment conditions who saw no KCDs and after collapsing the KCD and MIX conditions (due to minimal usage of mixed-initiative links, for the most part the two conditions were equivalent).  We therefore treated KCD and problem completion as continuous variables.&lt;br /&gt;
&lt;br /&gt;
The following variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
*Number of problems completed before the post-test was administered&lt;br /&gt;
*Number of reflection questions that the student completed&lt;br /&gt;
*CQPR—grade point average&lt;br /&gt;
*College major grouping&lt;br /&gt;
*Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypotheses ===&lt;br /&gt;
&lt;br /&gt;
*Because responding to reflection questions is a form of “active learning,” students in any of the three experimental conditions (self-explanation of canned text, standard tutor-led KCDs, limited mixed-initiative KCDs) should outperform students in the control (no-reflection) condition.  This hypothesis was supported.&lt;br /&gt;
*The more interactive the reflection modality the better, so mixed-initiative &amp;gt; standard KCD &amp;gt; self-explanation of canned text.  We were unable to test this hypothesis, due to low participation.&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;Gain scores summary&#039;&#039;:&lt;br /&gt;
**There were no significant differences in mean gain score (or in pre-test and post-test scores) by condition (4 conditions; 3 treatment, 1 control) or by collapsed effective condition (Ctrl, CTR, KCD).&lt;br /&gt;
**Yoked pairs analysis, comparing students from the same major groups with identical pre-test scores (and minimal CQPR disparity) who completed no reflection questions with students who completed five or more questions (&#039;&#039;n&#039;&#039; = 19 pairs), showed that “treated” subjects tended to out-gain “untreated” subjects (see the illustrative sketch after this list).&lt;br /&gt;
*&#039;&#039;Regression analysis summary&#039;&#039;:&lt;br /&gt;
**The number of reflection questions completed had a significant positive effect on post-test scores.&lt;br /&gt;
*&#039;&#039;Exam score summary&#039;&#039;:&lt;br /&gt;
**For the final exam, there was no significant impact of the number of reflection questions completed.&lt;br /&gt;
**For one course section’s hourly exam on work and energy, there was a significant, positive impact of the number of reflection questions completed, but only when CQPR was dropped from the model.  For the other section’s hourly exam, “treated” subjects significantly outperformed “untreated” subjects, despite being outnumbered 3 to 1.&lt;br /&gt;
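&lt;br /&gt;
The yoked-pairs comparison above can be sketched as follows in Python (pandas/SciPy).  The file and column names (student, major_group, pretest, posttest, cqpr, n_rqs) are hypothetical, the greedy matching is an illustration rather than the study&#039;s actual pairing procedure, and a paired t-test stands in for whichever paired comparison was used.&lt;br /&gt;
&lt;br /&gt;
 import pandas as pd&lt;br /&gt;
 from scipy import stats&lt;br /&gt;
 # Hypothetical table: student, major_group, pretest, posttest, cqpr, n_rqs.&lt;br /&gt;
 df = pd.read_csv(&#039;students.csv&#039;)&lt;br /&gt;
 df[&#039;gain&#039;] = df[&#039;posttest&#039;] - df[&#039;pretest&#039;]&lt;br /&gt;
 treated = df[df[&#039;n_rqs&#039;] &amp;gt;= 5]&lt;br /&gt;
 untreated = df[df[&#039;n_rqs&#039;] == 0]&lt;br /&gt;
 # Candidate pairs share a major group and an identical pre-test score;&lt;br /&gt;
 # keep the candidate with the smallest CQPR disparity per untreated student.&lt;br /&gt;
 pairs = untreated.merge(treated, on=[&#039;major_group&#039;, &#039;pretest&#039;], suffixes=(&#039;_u&#039;, &#039;_t&#039;))&lt;br /&gt;
 pairs[&#039;cqpr_gap&#039;] = (pairs[&#039;cqpr_u&#039;] - pairs[&#039;cqpr_t&#039;]).abs()&lt;br /&gt;
 pairs = pairs.sort_values(&#039;cqpr_gap&#039;).drop_duplicates(&#039;student_u&#039;)&lt;br /&gt;
 # Paired comparison of gains within the yoked pairs.&lt;br /&gt;
 t, p = stats.ttest_rel(pairs[&#039;gain_t&#039;], pairs[&#039;gain_u&#039;])&lt;br /&gt;
 print(len(pairs), round(t, 2), round(p, 3))&lt;br /&gt;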
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
This study is part of the Interactive Communication cluster, and its hypothesis is a specialization of the IC cluster’s central hypothesis, which is that robust learning occurs when two conditions are met:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;The learning event space should have paths that are mostly learning-by-doing along with alternative paths where a second agent does most of the work&#039;&#039;.  Since this experiment did not deal with collaboration between agents, it did not test this condition.  It did, however, show that the more that students engage in one form of learning-by-doing—namely, post-practice reflection—the more they learn.&lt;br /&gt;
* &#039;&#039;The student takes the learning-by-doing path unless it becomes too difficult&#039;&#039;.  We are unable to determine why students chose not to take the learning-by-doing path.  Perhaps doing the reflection questions was too difficult for some students, but we suspect that this was not the case.  A more likely explanation is that students did not have the time to complete these questions, and there were no negative consequences for avoiding them.  In the follow-up study that we are currently running, students are required to complete the reflection questions in order to get credit for completing a problem.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to site visitors, 2005&lt;br /&gt;
* Poster presentation at the 2006 American Association for Physics Teachers (AAPT) Conference&lt;br /&gt;
* Full-paper accepted at AIED 2007&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Collins, A., Brown, J. S., &amp;amp; Newman, S. E. (1989). Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. In L. B. Resnick (Ed.), &#039;&#039;Knowing, learning and instruction: Essays in honor of Robert Glaser&#039;&#039; (pp. 453-494). Hillsdale, NJ: Erlbaum.&lt;br /&gt;
*Lee, A. Y., &amp;amp; Hutchison, L. (1998). Improving learning from examples through reflection. &#039;&#039;Journal of Experimental Psychology: Applied, 4&#039;&#039; (3), 187-210.&lt;br /&gt;
*Katz, S., Allbritton, D., &amp;amp; Connelly, J. (2003). Going beyond the problem given: How human tutors use post-solution discussions to support transfer. &#039;&#039;International Journal of Artificial Intelligence and Education, 13&#039;&#039; (1), 79-116.&lt;br /&gt;
*Katz, S., Lesgold, A., Hughes, E., Peters, D., Eggan, G., Gordin, M., &amp;amp; Greenberg, L. (1998). Sherlock II: An intelligent tutoring system built upon the LRDC tutor framework. In C. P. Bloom &amp;amp; R. B. Loftin (Eds.), &#039;&#039;Facilitating the development and use of interactive learning environments&#039;&#039; (pp. 227-258). Mahwah, NJ: Erlbaum.&lt;br /&gt;
*Katz, S., O’Donnell, G., &amp;amp; Kay, H. (2000). An approach to analyzing the role and structure of reflective dialogue. &#039;&#039;International Journal of Artificial Intelligence and Education, 11&#039;&#039;, 320-343.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Reflective Dialogues (Katz)|Reflective Dialogues (Katz, Connelly &amp;amp; Treacy, 2006)]]&lt;br /&gt;
* [[Extending Reflective Dialogue Support (Katz &amp;amp; Connelly)|Extending Reflective Dialogue Support (Katz &amp;amp; Connelly, 2007)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; VanLehn)]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Post-practice_reflection_(Katz)&amp;diff=8522</id>
		<title>Post-practice reflection (Katz)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Post-practice_reflection_(Katz)&amp;diff=8522"/>
		<updated>2008-11-03T20:10:54Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Independent Variables */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Post-practice reflection in a first-year physics course: Does mixed-initiative interaction support robust learning better than tutor-led interaction or canned text? ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PI&#039;&#039;&#039; || Sandra Katz&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Post-Docs:&#039;&#039;&#039; &lt;br /&gt;
| &amp;lt;br&amp;gt;John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 1/1/05&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 12/31/05&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 123&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 500 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || No; Andes data still incompatible&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
We conducted an [[in vivo experiment]] within the PSLC Physics LearnLab to address two questions about reflection on recent problem-solving activity within [[Andes]]: (1) Does [[post-practice reflection]] support robust learning of physics—that is, students’ ability to [[transfer]] what they learned during instruction to novel situations? and (2) What is the preferred delivery mode for post-practice reflection in intelligent tutoring systems?  We compared pre-test to post-test learning gains of students who were randomly assigned to one of four conditions that represent points on a scale of increasingly generative student activity: (1) a control group who solved a set of problems in Andes without any reflective activity, (2) a self-explanation group, who solved Andes problems, responded to several “[[reflection questions]]” (Lee &amp;amp; Hutchison, 1998) after each problem and received a canned, expert-generated explanation that they were prompted to compare with their response, (3) a “tutor-led interaction” group, who solved the same Andes problems and reflection questions (RQs) as the didactic group, but the questions were embedded in [[Knowledge Construction Dialogues]]  (KCD’s) that guided students in generating a correct response, and (4) a “scaffolded mixed initiative KCD group,” which solved the same problems and responded to the same reflection questions as the other reflection groups, embedded in similar KCD’s as those presented to the “tutor-led interaction” group, with the added support of follow-up questioning menus that allowed students to take initiative. &lt;br /&gt;
	&lt;br /&gt;
Unfortunately, participation in the three experimental conditions was too low to compare the effectiveness of these treatments.  However, a yoked pairs analysis revealed that the more reflection questions that students did, of any type, the better they did, with respect to both post-test scores and pre-test to post-test gain scores.  Similarly, regression analysis revealed that the number of dialogues a student completed had a significant positive effect on post-test score, independent of the number of problems he or she completed.  These findings provide empirical support for post-practice reflection in an ITS that is a central component of a course.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
&lt;br /&gt;
=== Research questions ===&lt;br /&gt;
&lt;br /&gt;
Do reflection questions after physics problem solving support robust learning?  Is more interactive reflection, between the automated tutor and student, more effective in supporting robust learning than less interactive reflection?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
Despite the well-established role that post-practice reflective dialogue plays in apprentice-style learning (Collins, Brown, &amp;amp; Newman, 1989), few studies have been conducted to test its effectiveness.  Hence, there is little empirical support for incorporating post-practice reflection in courses led by a human teacher or within intelligent tutoring systems (ITSs).  Nor is there much guidance on how to implement reflection effectively, especially within the constraints of natural-language understanding and generation capabilities.&lt;br /&gt;
&lt;br /&gt;
Research by Lee and Hutchison (1998) focused on the effectiveness of reflection questions posed after students studied worked examples on balancing chemistry equations in a computer-based learning environment.  This study found that reflection questions enhanced students’ problem-solving ability.  However, the role of reflection in promoting conceptual understanding was not addressed; nor did the study investigate whether reflection questions after problem-solving exercises (as opposed to example studying) support learning.  &lt;br /&gt;
&lt;br /&gt;
Two laboratory studies conducted by Katz and her colleagues addressed these questions.   In a longitudinal analysis of student performance on avionics tasks in Sherlock 2 (e.g., Katz et al., 1998), Katz, O’Donnell and Kay (2000) found that discussions that took place between an avionics expert and two students were more effective in resolving misconceptions when they were distributed across problem solving and debrief than were discussions that took place during problem solving alone.  However, due to constraints inherent in the research setting, there was no control group in this study—that is, avionics trainees who did not experience debrief led by a domain expert—and no instrument to measure performance gains.   A follow-up study by Katz, Allbritton, &amp;amp; Connelly (2003) addressed these limitations in a different domain, first-year college physics.  Forty-six students taking college physics solved problems in Andes in one of three conditions: with no reflection questions after problem solving (control group), with reflection questions discussed with human tutors, or with the same reflection questions followed by canned feedback (without a human tutor).  A comparison of pre-test and post-test scores was conducted to measure learning gains. The main result was that students learned more with reflection questions than without, but the two conditions with reflection questions (canned feedback and human tutoring) did not differ significantly.&lt;br /&gt;
&lt;br /&gt;
The current study is significant because it validated the results of these laboratory studies in an actual classroom setting.  Specifically, it showed that post-practice reflection supports learning, as measured by pre-test to post-test learning gain scores.  Due to low participation, however, this experiment did not shed light on the question of which modality of reflection is most effective.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near transfer ([[normal post-test]]) and far [[transfer]] (robust learning) items.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Short-term retention&#039;&#039;. Performance on course exam that covered target topic (work and energy), 1-2 weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Students were block-randomly assigned to one of four dialogue conditions, with reflective KCDs presented after selected Andes problems: &lt;br /&gt;
* control (Ctrl), with no reflection questions or KCDs; just the standard Andes problems:&lt;br /&gt;
[[Image:E1b-50.jpg]]&lt;br /&gt;
* canned text response (CTR), with free-response KCDs followed by pre-scripted expert responses; students&#039; responses were not given any positive or negative feedback:&lt;br /&gt;
[[Image:5ctr.jpg]]&lt;br /&gt;
* standard KCDs (KCD), with shorter-answer questions and tutor responses that indicated what parts of the student&#039;s response were incomplete or incorrect and tried to elicit the correct aspects:&lt;br /&gt;
[[Image:5kcd.jpg]]&lt;br /&gt;
* limited mixed-initiative KCDs (MIX), same as the standard KCDs but with on-demand hypertext question-and-answer links after selected tutor questions:&lt;br /&gt;
[[Image:5mix.jpg]]&lt;br /&gt;
&lt;br /&gt;
However, there were no significant differences between categorical condition assignments, even after reclassifying as Ctrl subjects those in the three treatment conditions who saw no KCDs and after collapsing the KCD and MIX conditions (due to minimal usage of mixed-initiative links, for the most part the two conditions were equivalent).  We therefore treated KCD and problem completion as continuous variables.&lt;br /&gt;
&lt;br /&gt;
The following variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
*Number of problems completed before the post-test was administered&lt;br /&gt;
*Number of reflection questions that the student completed&lt;br /&gt;
*CQPR—grade point average&lt;br /&gt;
*College major grouping&lt;br /&gt;
*Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypotheses ===&lt;br /&gt;
&lt;br /&gt;
*Because responding to reflection questions is a form of “active learning,” students in any of the three experimental conditions (self-explanation of canned text, standard tutor-led KCDs, limited mixed-initiative KCDs) should outperform students in the control (no-reflection) condition.  This hypothesis was supported.&lt;br /&gt;
*The more interactive the reflection modality the better, so mixed-initiative &amp;gt; standard KCD &amp;gt; self-explanation of canned text.  We were unable to test this hypothesis, due to low participation.&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;Gain scores summary&#039;&#039;:&lt;br /&gt;
**There were no significant differences in mean gain score (or in pre-test and post-test scores) by condition (4 conditions; 3 treatment, 1 control) or by collapsed effective condition (Ctrl, CTR, KCD).&lt;br /&gt;
**Yoked pairs analysis, comparing students from the same major groups with identical pre-test scores (and minimal CQPR disparity) who completed no reflection questions with students who completed five or more questions (&#039;&#039;n&#039;&#039; = 19 pairs), showed that “treated” subjects tended to out-gain “untreated” subjects.&lt;br /&gt;
*&#039;&#039;Regression analysis summary&#039;&#039;:&lt;br /&gt;
**The number of reflection questions completed had a significant positive effect on post-test scores.&lt;br /&gt;
*&#039;&#039;Exam score summary&#039;&#039;:&lt;br /&gt;
**For the final exam, there was no significant impact of the number of reflection questions completed.&lt;br /&gt;
**For one course section’s hourly exam on work and energy, there was a significant, positive impact of the number of reflection questions completed, but only when CQPR was dropped from the model.  For the other section’s hourly exam, “treated” subjects significantly outperformed “untreated” subjects, despite being outnumbered 3 to 1.&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
This study is part of the Interactive Communication cluster, and its hypothesis is a specialization of the IC cluster’s central hypothesis, which is that robust learning occurs when two conditions are met:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;The learning event space should have paths that are mostly learning-by-doing along with alternative paths where a second agent does most of the work&#039;&#039;.  Since this experiment did not deal with collaboration between agents, it did not test this condition.  It did, however, show that the more that students engage in one form of learning-by-doing—namely, post-practice reflection—the more they learn.&lt;br /&gt;
* &#039;&#039;The student takes the learning-by-doing path unless it becomes too difficult&#039;&#039;.  We are unable to determine why students chose not to take the learning-by-doing path.  Perhaps doing the reflection questions was too difficult for some students, but we suspect that this was not the case.  A more likely explanation is that students did not have the time to complete these questions, and there were no negative consequences for avoiding them.  In the follow-up study that we are currently running, students are required to complete the reflection questions in order to get credit for completing a problem.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to site visitors, 2005&lt;br /&gt;
* Poster presentation at the 2006 American Association for Physics Teachers (AAPT) Conference&lt;br /&gt;
* Full-paper accepted at AIED 2007&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Collins, A., Brown, J. S., &amp;amp; Newman, S. E. (1989). Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. In L. B. Resnick (Ed.), &#039;&#039;Knowing, learning and instruction: Essays in honor of Robert Glaser&#039;&#039; (pp. 453-494). Hillsdale, NJ: Erlbaum.&lt;br /&gt;
*Lee, A. Y., &amp;amp; Hutchison, L. (1998). Improving learning from examples through reflection. &#039;&#039;Journal of Experimental Psychology: Applied, 4&#039;&#039; (3), 187-210.&lt;br /&gt;
*Katz, S., Allbritton, D., &amp;amp; Connelly, J. (2003). Going beyond the problem given: How human tutors use post-solution discussions to support transfer. &#039;&#039;International Journal of Artificial Intelligence and Education, 13&#039;&#039; (1), 79-116.&lt;br /&gt;
*Katz, S., Lesgold, A., Hughes, E., Peters, D., Eggan, G., Gordin, M., &amp;amp; Greenberg, L. (1998). Sherlock II: An intelligent tutoring system built upon the LRDC tutor framework. In C. P. Bloom &amp;amp; R. B. Loftin (Eds.), &#039;&#039;Facilitating the development and use of interactive learning environments&#039;&#039; (pp. 227-258). Mahwah, NJ: Erlbaum.&lt;br /&gt;
*Katz, S., O’Donnell, G., &amp;amp; Kay, H. (2000). An approach to analyzing the role and structure of reflective dialogue. &#039;&#039;International Journal of Artificial Intelligence and Education, 11&#039;&#039;, 320-343.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Reflective Dialogues (Katz)|Reflective Dialogues (Katz, Connelly &amp;amp; Treacy, 2006)]]&lt;br /&gt;
* [[Extending Reflective Dialogue Support (Katz &amp;amp; Connelly)|Extending Reflective Dialogue Support (Katz &amp;amp; Connelly, 2007)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; VanLehn)]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Post-practice_reflection_(Katz)&amp;diff=8521</id>
		<title>Post-practice reflection (Katz)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Post-practice_reflection_(Katz)&amp;diff=8521"/>
		<updated>2008-11-03T20:05:34Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Independent Variables */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Post-practice reflection in a first-year physics course: Does mixed-initiative interaction support robust learning better than tutor-led interaction or canned text? ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PI&#039;&#039;&#039; || Sandra Katz&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Post-Docs:&#039;&#039;&#039; &lt;br /&gt;
| &amp;lt;br&amp;gt;John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 1/1/05&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 12/31/05&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 123&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 500 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || No; Andes data still incompatible&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
We conducted an [[in vivo experiment]] within the PSLC Physics LearnLab to address two questions about reflection on recent problem-solving activity within [[Andes]]: (1) Does [[post-practice reflection]] support robust learning of physics—that is, students’ ability to [[transfer]] what they learned during instruction to novel situations? and (2) What is the preferred delivery mode for post-practice reflection in intelligent tutoring systems?  We compared pre-test to post-test learning gains of students who were randomly assigned to one of four conditions that represent points on a scale of increasingly generative student activity: (1) a control group, who solved a set of problems in Andes without any reflective activity, (2) a self-explanation group, who solved Andes problems, responded to several “[[reflection questions]]” (Lee &amp;amp; Hutchison, 1998) after each problem, and received a canned, expert-generated explanation that they were prompted to compare with their response, (3) a “tutor-led interaction” group, who solved the same Andes problems and reflection questions (RQs) as the self-explanation group, but with the questions embedded in [[Knowledge Construction Dialogues]] (KCDs) that guided students in generating a correct response, and (4) a “scaffolded mixed-initiative KCD group,” who solved the same problems and responded to the same reflection questions as the other reflection groups, embedded in KCDs similar to those presented to the “tutor-led interaction” group, with the added support of follow-up questioning menus that allowed students to take initiative.&lt;br /&gt;
	&lt;br /&gt;
Unfortunately, participation in the three experimental conditions was too low to compare the effectiveness of these treatments.  However, a yoked pairs analysis revealed that the more reflection questions that students did, of any type, the better they did, with respect to both post-test scores and pre-test to post-test gain scores.  Similarly, regression analysis revealed that the number of dialogues a student completed had a significant positive effect on post-test score, independent of the number of problems he or she completed.  These findings provide empirical support for post-practice reflection in an ITS that is a central component of a course.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
&lt;br /&gt;
=== Research questions ===&lt;br /&gt;
&lt;br /&gt;
Do reflection questions after physics problem solving support robust learning?  Is more interactive reflection between the automated tutor and the student more effective in supporting robust learning than less interactive reflection?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
Despite the well-established role that post-practice reflective dialogue plays in apprentice-style learning (Collins, Brown, &amp;amp; Newman, 1989), few studies have been conducted to test its effectiveness.  Hence, there is little empirical support for incorporating post-practice reflection in courses led by a human teacher or within intelligent tutoring systems (ITSs).  Nor is there much guidance on how to implement reflection effectively, especially within the constraints of natural-language understanding and generation capabilities.&lt;br /&gt;
&lt;br /&gt;
Research by Lee and Hutchison (1998) focused on the effectiveness of reflection questions posed after students studied worked examples on balancing chemistry equations in a computer-based learning environment.  This study found that reflection questions enhanced students’ problem-solving ability.  However, the role of reflection in promoting conceptual understanding was not addressed; nor did the study investigate whether reflection questions after problem-solving exercises (as opposed to example studying) support learning.  &lt;br /&gt;
&lt;br /&gt;
Two laboratory studies conducted by Katz and her colleagues addressed these questions.   In a longitudinal analysis of student performance on avionics tasks in Sherlock 2 (e.g., Katz et al., 1998), Katz, O’Donnell and Kay (2000) found that discussions that took place between an avionics expert and two students were more effective in resolving misconceptions when they were distributed across problem solving and debrief than were discussions that took place during problem solving alone.  However, due to constraints inherent in the research setting, there was no control group in this study—that is, avionics trainees who did not experience debrief led by a domain expert—and no instrument to measure performance gains.   A follow-up study by Katz, Allbritton, &amp;amp; Connelly (2003) addressed these limitations in a different domain, first-year college physics.  Forty-six students taking college physics solved problems in Andes in one of three conditions: with no reflection questions after problem solving (control group), with reflection questions discussed with human tutors, or with the same reflection questions followed by canned feedback (without a human tutor).  A comparison of pre-test and post-test scores was conducted to measure learning gains. The main result was that students learned more with reflection questions than without, but the two conditions with reflection questions (canned feedback and human tutoring) did not differ significantly.&lt;br /&gt;
&lt;br /&gt;
The current study is significant because it validated the results of these laboratory studies in an actual classroom setting.  Specifically, it showed that post-practice reflection supports learning, as measured by pre-test to post-test learning gain scores.  Due to low participation, however, this experiment did not shed light on the question of which modality of reflection is most effective.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near transfer ([[normal post-test]]) and far [[transfer]] (robust learning) items.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Short-term retention&#039;&#039;. Performance on the course exam that covered the target topic (work and energy), administered 1-2 weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on the final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Students were block-randomly assigned to one of four dialogue conditions: &lt;br /&gt;
* control (Ctrl), with no reflection questions or KCDs; just standard Andes problems:&lt;br /&gt;
[[Image:E1b-50.jpg]]&lt;br /&gt;
* canned text response (CTR), with free-response KCDs (after each target Andes problem) followed by pre-scripted expert responses; students&#039; responses were not given any positive/negative feedback:&lt;br /&gt;
[[Image:5ctr.jpg]]&lt;br /&gt;
* standard KCDs (KCD), with shorter-answer questions and tutor responses that indicate what parts of the student&#039;s response were incomplete or incorrect and try to elicit the correct aspects:&lt;br /&gt;
[[Image:5kcd.jpg]]&lt;br /&gt;
* limited mixed-initiative KCDs (MIX), same as the standard KCDs but with on-demand hypertext question-and-answer links after selected tutor questions:&lt;br /&gt;
[[Image:5mix.jpg]]&lt;br /&gt;
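&lt;br /&gt;
The block-random assignment described above could be implemented roughly as follows.  This is a minimal sketch only: the roster format, and the use of course section as the blocking variable, are illustrative assumptions rather than details reported for this study.&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
# Minimal sketch of block randomization into the four conditions.&lt;br /&gt;
# The roster format and the choice of course section as the block are assumptions.&lt;br /&gt;
import random&lt;br /&gt;
&lt;br /&gt;
CONDITIONS = [&#039;Ctrl&#039;, &#039;CTR&#039;, &#039;KCD&#039;, &#039;MIX&#039;]&lt;br /&gt;
&lt;br /&gt;
def block_randomize(roster):&lt;br /&gt;
    # roster: list of (student_id, section) pairs&lt;br /&gt;
    by_section = {}&lt;br /&gt;
    for student, section in roster:&lt;br /&gt;
        by_section.setdefault(section, []).append(student)&lt;br /&gt;
    assignments = {}&lt;br /&gt;
    for students in by_section.values():&lt;br /&gt;
        random.shuffle(students)                      # shuffle within the block&lt;br /&gt;
        for i, student in enumerate(students):&lt;br /&gt;
            assignments[student] = CONDITIONS[i % 4]  # cycle through the conditions&lt;br /&gt;
    return assignments&lt;br /&gt;
&lt;br /&gt;
print(block_randomize([(1, &#039;A&#039;), (2, &#039;A&#039;), (3, &#039;A&#039;), (4, &#039;A&#039;), (5, &#039;B&#039;), (6, &#039;B&#039;)]))&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;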
&lt;br /&gt;
However, there were no significant differences between categorical condition assignments, even after reclassifying as Ctrl subjects those in the three treatment conditions who saw no KCDs and after collapsing the KCD and MIX conditions (because the mixed-initiative links were rarely used, the two conditions were, for the most part, equivalent).  We therefore treated KCD completion and problem completion as continuous variables.&lt;br /&gt;
&lt;br /&gt;
The following variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
*Number of problems completed before the post-test was administered&lt;br /&gt;
*Number of reflection questions that the student completed&lt;br /&gt;
*CQPR (cumulative grade point average)&lt;br /&gt;
*College major grouping&lt;br /&gt;
*Pre-test score&lt;br /&gt;
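&lt;br /&gt;
A minimal sketch of such a regression appears below.  The data file and column names (n_problems, n_rqs, cqpr, major_group, pre_test, post_test) are hypothetical placeholders for the variables listed above, not the study&#039;s actual files or coding.&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
# Sketch of the regression described above; the file and column names are hypothetical.&lt;br /&gt;
import pandas as pd&lt;br /&gt;
import statsmodels.formula.api as smf&lt;br /&gt;
&lt;br /&gt;
df = pd.read_csv(&#039;reflection_study.csv&#039;)   # one row per student (hypothetical file)&lt;br /&gt;
&lt;br /&gt;
model = smf.ols(&lt;br /&gt;
    &#039;post_test ~ n_problems + n_rqs + cqpr + C(major_group) + pre_test&#039;,&lt;br /&gt;
    data=df,&lt;br /&gt;
).fit()&lt;br /&gt;
&lt;br /&gt;
# The coefficient on n_rqs estimates the effect of reflection questions completed,&lt;br /&gt;
# controlling for problems completed, CQPR, major grouping, and pre-test score.&lt;br /&gt;
print(model.summary())&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;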
&lt;br /&gt;
=== Hypotheses ===&lt;br /&gt;
&lt;br /&gt;
*Because responding to reflection questions is a form of “active learning,” students in any of the three experimental conditions (self-explanation of canned text, tutor-led standard KCDs, mixed-initiative KCDs) should outperform students in the control (no-reflection) condition.  This hypothesis was supported.&lt;br /&gt;
*The more interactive the reflection modality the better, so mixed-initiative &amp;gt; standard KCD &amp;gt; self-explanation of canned text.  We were unable to test this hypothesis, due to low participation.&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;Gain scores summary&#039;&#039;:&lt;br /&gt;
**There were no significant differences in mean gain score (or in pre-test and post-test scores) by condition (4 conditions; 3 treatment, 1 control) or by collapsed effective condition (Ctrl, CTR, KCD).&lt;br /&gt;
**Yoked-pairs analysis, comparing students from the same major groups with identical pre-test scores (and minimal CQPR disparity) who completed no reflection questions with students who completed five or more questions (&#039;&#039;n&#039;&#039; = 19 pairs), showed that “treated” subjects tended to out-gain “untreated” subjects (a sketch of this pairing appears after this list).&lt;br /&gt;
*&#039;&#039;Regression analysis summary&#039;&#039;:&lt;br /&gt;
**The number of reflection questions completed had a significant positive effect on post-test scores.&lt;br /&gt;
*&#039;&#039;Exam score summary&#039;&#039;:&lt;br /&gt;
**For the final exam, there was no significant impact of the number of reflection questions completed.&lt;br /&gt;
**For one course section’s hourly exam on work and energy, there was a significant, positive impact of the number of reflection questions completed, but only when CQPR was dropped from the model.  For the other section’s hourly exam, “treated” subjects significantly outperformed “untreated” subjects, despite being outnumbered 3 to 1.&lt;br /&gt;
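&lt;br /&gt;
The yoked-pairs comparison referenced above can be sketched along the following lines.  Column names are hypothetical, and the minimal-CQPR-disparity tie-break used in the actual pairing is omitted here for brevity.&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
# Sketch of the yoked-pairs comparison; column names are hypothetical and the&lt;br /&gt;
# CQPR-disparity tie-break from the study is omitted for brevity.&lt;br /&gt;
import pandas as pd&lt;br /&gt;
&lt;br /&gt;
df = pd.read_csv(&#039;reflection_study.csv&#039;)     # hypothetical file, one row per student&lt;br /&gt;
df[&#039;gain&#039;] = df[&#039;post_test&#039;] - df[&#039;pre_test&#039;]&lt;br /&gt;
&lt;br /&gt;
untreated = df[df[&#039;n_rqs&#039;] == 0]             # completed no reflection questions&lt;br /&gt;
treated = df[df[&#039;n_rqs&#039;] &gt;= 5]               # completed five or more&lt;br /&gt;
&lt;br /&gt;
pairs, treated_wins = 0, 0&lt;br /&gt;
for _, u in untreated.iterrows():&lt;br /&gt;
    match = treated[treated[&#039;major_group&#039;] == u[&#039;major_group&#039;]]&lt;br /&gt;
    match = match[match[&#039;pre_test&#039;] == u[&#039;pre_test&#039;]]&lt;br /&gt;
    if match.empty:&lt;br /&gt;
        continue&lt;br /&gt;
    pairs += 1&lt;br /&gt;
    if match.iloc[0][&#039;gain&#039;] &gt; u[&#039;gain&#039;]:    # did the treated partner out-gain?&lt;br /&gt;
        treated_wins += 1&lt;br /&gt;
&lt;br /&gt;
print(pairs, &#039;pairs;&#039;, treated_wins, &#039;with a larger gain for the treated student&#039;)&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;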
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
This study is part of the Interactive Communication cluster, and its hypothesis is a specialization of the IC cluster’s central hypothesis.  The IC cluster’s central hypothesis is that robust learning occurs when two conditions are met:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;The learning event space should have paths that are mostly learning-by-doing along with alternative paths where a second agent does most of the work&#039;&#039;.  Since this experiment did not deal with collaboration between agents, it did not test this condition.  It did, however, show that the more students engage in one form of learning-by-doing—namely, post-practice reflection—the more they learn.&lt;br /&gt;
* &#039;&#039;The student takes the learning-by-doing path unless it becomes too difficult&#039;&#039;.  We are unable to determine why students chose not to take the learning-by-doing path.  Perhaps doing the reflection questions was too difficult for some students, but we suspect that this was not the case.  A more likely explanation is that students did not have the time to complete these questions, and there were no negative consequences for avoiding them.  In the follow-up study that we are currently running, students are required to complete the reflection questions in order to get credit for completing a problem.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to site visitors, 2005&lt;br /&gt;
* Poster presentation at the 2006 American Association for Physics Teachers (AAPT) Conference&lt;br /&gt;
* Full-paper accepted at AIED 2007&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Collins, A., Brown, J. S., &amp;amp; Newman, S. E. (1989). Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. In L. B. Resnick (Ed.), &#039;&#039;Knowing, learning, and instruction: Essays in honor of Robert Glaser&#039;&#039; (pp. 453-494). Hillsdale, NJ: Erlbaum.&lt;br /&gt;
*Lee, A. Y., &amp;amp; Hutchison, L. (1998). Improving learning from examples through reflection. &#039;&#039;Journal of Experimental Psychology: Applied, 4&#039;&#039; (3), 187-210.&lt;br /&gt;
*Katz, S., Allbritton, D., &amp;amp; Connelly, J. (2003). Going beyond the problem given: How human tutors use post-solution discussions to support transfer. &#039;&#039;International Journal of Artificial Intelligence and Education, 13&#039;&#039; (1), 79-116.&lt;br /&gt;
*Katz, S., Lesgold, A., Hughes, E., Peters, D., Eggan, G., Gordin, M., &amp;amp; Greenberg, L. (1998). Sherlock II: An intelligent tutoring system built upon the LRDC tutor framework. In C. P. Bloom &amp;amp; R. B. Loftin (Eds.), &#039;&#039;Facilitating the development and use of interactive learning environments&#039;&#039; (pp. 227-258). Mahwah, NJ: Erlbaum.&lt;br /&gt;
*Katz, S., O’Donnell, G., &amp;amp; Kay, H. (2000). An approach to analyzing the role and structure of reflective dialogue. &#039;&#039;International Journal of Artificial Intelligence and Education, 11&#039;&#039;, 320-343.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Reflective Dialogues (Katz)|Reflective Dialogues (Katz, Connelly &amp;amp; Treacy, 2006)]]&lt;br /&gt;
* [[Extending Reflective Dialogue Support (Katz &amp;amp; Connelly)|Extending Reflective Dialogue Support (Katz &amp;amp; Connelly, 2007)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; Vanlehn)]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=File:5mix.jpg&amp;diff=8520</id>
		<title>File:5mix.jpg</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=File:5mix.jpg&amp;diff=8520"/>
		<updated>2008-11-03T20:01:12Z</updated>

		<summary type="html">&lt;p&gt;Connelly: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=File:5kcd.jpg&amp;diff=8519</id>
		<title>File:5kcd.jpg</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=File:5kcd.jpg&amp;diff=8519"/>
		<updated>2008-11-03T20:00:33Z</updated>

		<summary type="html">&lt;p&gt;Connelly: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=File:5ctr.jpg&amp;diff=8518</id>
		<title>File:5ctr.jpg</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=File:5ctr.jpg&amp;diff=8518"/>
		<updated>2008-11-03T19:59:42Z</updated>

		<summary type="html">&lt;p&gt;Connelly: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Post-practice_reflection_(Katz)&amp;diff=8382</id>
		<title>Post-practice reflection (Katz)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Post-practice_reflection_(Katz)&amp;diff=8382"/>
		<updated>2008-10-09T19:51:10Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Independent Variables */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Post-practice reflection in a first-year physics course: Does mixed-initiative interaction support robust learning better than tutor-led interaction or canned text? ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PI&#039;&#039;&#039; || Sandra Katz&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Post-Docs:&#039;&#039;&#039; &lt;br /&gt;
| &amp;lt;br&amp;gt;John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 1/1/05&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 12/31/05&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 123&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 500 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || No; Andes data still incompatible&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
We conducted an [[in vivo experiment]] within the PSLC Physics LearnLab to address two questions about reflection on recent problem-solving activity within [[Andes]]: (1) Does [[post-practice reflection]] support robust learning of physics—that is, students’ ability to [[transfer]] what they learned during instruction to novel situations? and (2) What is the preferred delivery mode for post-practice reflection in intelligent tutoring systems?  We compared pre-test to post-test learning gains of students who were randomly assigned to one of four conditions that represent points on a scale of increasingly generative student activity: (1) a control group who solved a set of problems in Andes without any reflective activity, (2) a self-explanation group, who solved Andes problems, responded to several “[[reflection questions]]” (Lee &amp;amp; Hutchison, 1998) after each problem and received a canned, expert-generated explanation that they were prompted to compare with their response, (3) a “tutor-led interaction” group, who solved the same Andes problems and reflection questions (RQs) as the didactic group, but the questions were embedded in [[Knowledge Construction Dialogues]]  (KCD’s) that guided students in generating a correct response, and (4) a “scaffolded mixed initiative KCD group,” which solved the same problems and responded to the same reflection questions as the other reflection groups, embedded in similar KCD’s as those presented to the “tutor-led interaction” group, with the added support of follow-up questioning menus that allowed students to take initiative. &lt;br /&gt;
	&lt;br /&gt;
Unfortunately, participation in the three experimental conditions was too low to compare the effectiveness of these treatments.  However, a yoked pairs analysis revealed that the more reflection questions that students did, of any type, the better they did, with respect to both post-test scores and pre-test to post-test gain scores.  Similarly, regression analysis revealed that the number of dialogues a student completed had a significant positive effect on post-test score, independent of the number of problems he or she completed.  These findings provide empirical support for post-practice reflection in an ITS that is a central component of a course.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
&lt;br /&gt;
=== Research questions ===&lt;br /&gt;
&lt;br /&gt;
Do reflection questions after physics problem solving support robust learning?  Is more interactive reflection, between the automated tutor and student, more effective in supporting robust learning than less interactive reflection?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
Despite the well-established role that post-practice reflective dialogue plays in apprentice-style learning (Collins, Brown, &amp;amp; Newman, 1989), few studies have been conducted to test its effectiveness.  Hence, there is little empirical support for incorporating post-practice reflection in courses led by a human teacher or within intelligent tutoring systems (ITSs).  Nor is there much guidance on how to implement reflection effectively, especially within the constraints of natural-language understanding and generation capabilities.&lt;br /&gt;
&lt;br /&gt;
Research by Lee and Hutchison (1998) focused on the effectiveness of reflection questions posed after students studied worked examples on balancing chemistry equations in a computer-based learning environment.  This study found that reflection questions enhanced students’ problem-solving ability.  However, the role of reflection in promoting conceptual understanding was not addressed; nor did the study investigate whether reflection questions after problem-solving exercises (as opposed to example studying) support learning.  &lt;br /&gt;
&lt;br /&gt;
Two laboratory studies conducted by Katz and her colleagues addressed these questions.   In a longitudinal analysis of student performance on avionics tasks in Sherlock 2 (e.g., Katz et al., 1998), Katz, O’Donnell and Kay (2000) found that discussions that took place between an avionics expert and two students were more effective in resolving misconceptions when they were distributed across problem solving and debrief than were discussions that took place during problem solving alone.  However, due to constraints inherent in the research setting, there was no control group in this study—that is, avionics trainees who did not experience debrief led by a domain expert—and no instrument to measure performance gains.   A follow-up study by Katz, Allbritton, &amp;amp; Connelly (2003) addressed these limitations in a different domain, first-year college physics.  Forty-six students taking college physics solved problems in Andes in one of three conditions: with no reflection questions after problem solving (control group), with reflection questions discussed with human tutors, or with the same reflection questions followed by canned feedback (without a human tutor).  A comparison of pre-test and post-test scores was conducted to measure learning gains. The main result was that students learned more with reflection questions than without, but the two conditions with reflection questions (canned feedback and human tutoring) did not differ significantly.&lt;br /&gt;
&lt;br /&gt;
The current study is significant because it validated the results of these laboratory studies in an actual classroom setting.  Specifically, it showed that post-practice reflection supports learning, as measured by pre-test to post-test learning gain scores.  Due to low participation, however, this experiment did not shed light on the question of which modality of reflection is most effective.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near transfer ([[normal post-test]]) and far [[transfer]] (robust learning) items.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Short-term retention&#039;&#039;. Performance on the course exam that covered the target topic (work and energy), administered 1-2 weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on the final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Students were block-randomly assigned to one of four dialogue conditions: &lt;br /&gt;
* control (Ctrl), with no reflection questions or KCDs;&lt;br /&gt;
[[Image:E1b-50.jpg]]&lt;br /&gt;
* canned text response (CTR), with free-response KCDs followed by pre-scripted expert responses; students&#039; responses were not given any positive/negative feedback.&lt;br /&gt;
* standard KCDs (KCD), with shorter-answer questions and tutor responses that indicate what parts of the student&#039;s response were incomplete or incorrect and try to elicit the correct aspects; and &lt;br /&gt;
* limited mixed-initiative (MIX), same as the standard KCDs, but with on-demand hypertext question-and-answer links.&lt;br /&gt;
&lt;br /&gt;
However, there were no significant differences between categorical condition assignments, even after reclassifying as Ctrl subjects those in the three treatment conditions who saw no KCDs and after collapsing the KCD and MIX conditions (due to minimal usage of mixed-initiative links, for the most part the two conditions were equivalent).  We therefore treated KCD and problem completion as continuous variables.&lt;br /&gt;
&lt;br /&gt;
The following variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
*Number of problems completed before the post-test was administered&lt;br /&gt;
*Number of reflection questions that the student completed&lt;br /&gt;
*CQPR—grade point average&lt;br /&gt;
*College major grouping&lt;br /&gt;
*Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypotheses ===&lt;br /&gt;
&lt;br /&gt;
*Because responding to reflection questions is a form of “active learning,” students in any of the three experimental conditions (self-explanation of canned text, tutor-led standard KCDs, mixed-initiative KCDs) should outperform students in the control (no-reflection) condition.  This hypothesis was supported.&lt;br /&gt;
*The more interactive the reflection modality the better, so mixed-initiative &amp;gt; standard KCD &amp;gt; self-explanation of canned text.  We were unable to test this hypothesis, due to low participation.&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;Gain scores summary&#039;&#039;:&lt;br /&gt;
**There were no significant differences in mean gain score (or in pre-test and post-test scores) by condition (4 conditions; 3 treatment, 1 control) or by collapsed effective condition (Ctrl, CTR, KCD).&lt;br /&gt;
**Yoked-pairs analysis, comparing students from the same major groups with identical pre-test scores (and minimal CQPR disparity) who completed no reflection questions with students who completed five or more questions (&#039;&#039;n&#039;&#039; = 19 pairs), showed that “treated” subjects tended to out-gain “untreated” subjects.&lt;br /&gt;
*&#039;&#039;Regression analysis summary&#039;&#039;:&lt;br /&gt;
**The number of reflection questions completed had a significant positive effect on post-test scores.&lt;br /&gt;
*&#039;&#039;Exam score summary&#039;&#039;:&lt;br /&gt;
**For the final exam, there was no significant impact of the number of reflection questions completed.&lt;br /&gt;
**For one course section’s hourly exam on work and energy, there was a significant, positive impact of the number of reflection questions completed, but only when CQPR was dropped from the model.  For the other section’s hourly exam, “treated” subjects significantly outperformed “untreated” subjects, despite being outnumbered 3 to 1.&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
This study is part of the Interactive Communication cluster, and its hypothesis is a specialization of the IC cluster’s central hypothesis.  The IC cluster’s central hypothesis is that robust learning occurs when two conditions are met:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;The learning event space should have paths that are mostly learning-by-doing along with alternative paths where a second agent does most of the work&#039;&#039;.  Since this experiment did not deal with collaboration between agents, it did not test this condition.  It did, however, show that the more students engage in one form of learning-by-doing—namely, post-practice reflection—the more they learn.&lt;br /&gt;
* &#039;&#039;The student takes the learning-by-doing path unless it becomes too difficult&#039;&#039;.  We are unable to determine why students chose not to take the learning-by-doing path.  Perhaps doing the reflection questions was too difficult for some students, but we suspect that this was not the case.  A more likely explanation is that students did not have the time to complete these questions, and there were no negative consequences for avoiding them.  In the follow-up study that we are currently running, students are required to complete the reflection questions in order to get credit for completing a problem.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to site visitors, 2005&lt;br /&gt;
* Poster presentation at the 2006 American Association for Physics Teachers (AAPT) Conference&lt;br /&gt;
* Full-paper accepted at AIED 2007&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Collins, A., Brown, J. S., &amp;amp; Newman, S. E. (1989). Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. In L. B. Resnick (Ed.), &#039;&#039;Knowing, learning, and instruction: Essays in honor of Robert Glaser&#039;&#039; (pp. 453-494). Hillsdale, NJ: Erlbaum.&lt;br /&gt;
*Lee, A. Y., &amp;amp; Hutchison, L. (1998). Improving learning from examples through reflection. &#039;&#039;Journal of Experimental Psychology: Applied, 4&#039;&#039; (3), 187-210.&lt;br /&gt;
*Katz, S., Allbritton, D., &amp;amp; Connelly, J. (2003). Going beyond the problem given: How human tutors use post-solution discussions to support transfer. &#039;&#039;International Journal of Artificial Intelligence and Education, 13&#039;&#039; (1), 79-116.&lt;br /&gt;
*Katz, S., Lesgold, A., Hughes, E., Peters, D., Eggan, G., Gordin, M., &amp;amp; Greenberg, L. (1998). Sherlock II: An intelligent tutoring system built upon the LRDC tutor framework. In C. P. Bloom &amp;amp; R. B. Loftin (Eds.), &#039;&#039;Facilitating the development and use of interactive learning environments&#039;&#039; (pp. 227-258). Mahwah, NJ: Erlbaum.&lt;br /&gt;
*Katz, S., O’Donnell, G., &amp;amp; Kay, H. (2000). An approach to analyzing the role and structure of reflective dialogue. &#039;&#039;International Journal of Artificial Intelligence and Education, 11&#039;&#039;, 320-343.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Reflective Dialogues (Katz)|Reflective Dialogues (Katz, Connelly &amp;amp; Treacy, 2006)]]&lt;br /&gt;
* [[Extending Reflective Dialogue Support (Katz &amp;amp; Connelly)|Extending Reflective Dialogue Support (Katz &amp;amp; Connelly, 2007)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; Vanlehn)]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=File:E1b-25.jpg&amp;diff=8380</id>
		<title>File:E1b-25.jpg</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=File:E1b-25.jpg&amp;diff=8380"/>
		<updated>2008-10-09T19:49:54Z</updated>

		<summary type="html">&lt;p&gt;Connelly: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=File:E1b-50.jpg&amp;diff=8377</id>
		<title>File:E1b-50.jpg</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=File:E1b-50.jpg&amp;diff=8377"/>
		<updated>2008-10-09T19:47:53Z</updated>

		<summary type="html">&lt;p&gt;Connelly: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=File:E1b.JPG&amp;diff=8373</id>
		<title>File:E1b.JPG</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=File:E1b.JPG&amp;diff=8373"/>
		<updated>2008-10-09T19:45:07Z</updated>

		<summary type="html">&lt;p&gt;Connelly: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Post-practice_reflection_(Katz)&amp;diff=8370</id>
		<title>Post-practice reflection (Katz)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Post-practice_reflection_(Katz)&amp;diff=8370"/>
		<updated>2008-10-09T19:28:27Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Independent Variables */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Post-practice reflection in a first-year physics course: Does mixed-initiative interaction support robust learning better than tutor-led interaction or canned text? ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PI&#039;&#039;&#039; || Sandra Katz&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Post-Docs:&#039;&#039;&#039; &lt;br /&gt;
| &amp;lt;br&amp;gt;John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 1/1/05&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 12/31/05&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 123&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 500 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || No; Andes data still incompatible&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
We conducted an [[in vivo experiment]] within the PSLC Physics LearnLab to address two questions about reflection on recent problem-solving activity within [[Andes]]: (1) Does [[post-practice reflection]] support robust learning of physics—that is, students’ ability to [[transfer]] what they learned during instruction to novel situations? and (2) What is the preferred delivery mode for post-practice reflection in intelligent tutoring systems?  We compared pre-test to post-test learning gains of students who were randomly assigned to one of four conditions that represent points on a scale of increasingly generative student activity: (1) a control group who solved a set of problems in Andes without any reflective activity, (2) a self-explanation group, who solved Andes problems, responded to several “[[reflection questions]]” (Lee &amp;amp; Hutchison, 1998) after each problem and received a canned, expert-generated explanation that they were prompted to compare with their response, (3) a “tutor-led interaction” group, who solved the same Andes problems and reflection questions (RQs) as the didactic group, but the questions were embedded in [[Knowledge Construction Dialogues]]  (KCD’s) that guided students in generating a correct response, and (4) a “scaffolded mixed initiative KCD group,” which solved the same problems and responded to the same reflection questions as the other reflection groups, embedded in similar KCD’s as those presented to the “tutor-led interaction” group, with the added support of follow-up questioning menus that allowed students to take initiative. &lt;br /&gt;
	&lt;br /&gt;
Unfortunately, participation in the three experimental conditions was too low to compare the effectiveness of these treatments.  However, a yoked pairs analysis revealed that the more reflection questions that students did, of any type, the better they did, with respect to both post-test scores and pre-test to post-test gain scores.  Similarly, regression analysis revealed that the number of dialogues a student completed had a significant positive effect on post-test score, independent of the number of problems he or she completed.  These findings provide empirical support for post-practice reflection in an ITS that is a central component of a course.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
&lt;br /&gt;
=== Research questions ===&lt;br /&gt;
&lt;br /&gt;
Do reflection questions after physics problem solving support robust learning?  Is more interactive reflection, between the automated tutor and student, more effective in supporting robust learning than less interactive reflection?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
Despite the well-established role that post-practice reflective dialogue plays in apprentice-style learning (Collins, Brown, &amp;amp; Newman, 1989), few studies have been conducted to test its effectiveness.  Hence, there is little empirical support for incorporating post-practice reflection in courses led by a human teacher or within intelligent tutoring systems (ITSs).  Nor is there much guidance on how to implement reflection effectively, especially within the constraints of natural-language understanding and generation capabilities.&lt;br /&gt;
&lt;br /&gt;
Research by Lee and Hutchison (1998) focused on the effectiveness of reflection questions posed after students studied worked examples on balancing chemistry equations in a computer-based learning environment.  This study found that reflection questions enhanced students’ problem-solving ability.  However, the role of reflection in promoting conceptual understanding was not addressed; nor did the study investigate whether reflection questions after problem-solving exercises (as opposed to example studying) support learning.  &lt;br /&gt;
&lt;br /&gt;
Two laboratory studies conducted by Katz and her colleagues addressed these questions.   In a longitudinal analysis of student performance on avionics tasks in Sherlock 2 (e.g., Katz et al., 1998), Katz, O’Donnell and Kay (2000) found that discussions that took place between an avionics expert and two students were more effective in resolving misconceptions when they were distributed across problem solving and debrief than were discussions that took place during problem solving alone.  However, due to constraints inherent in the research setting, there was no control group in this study—that is, avionics trainees who did not experience debrief led by a domain expert—and no instrument to measure performance gains.   A follow-up study by Katz, Allbritton, &amp;amp; Connelly (2003) addressed these limitations in a different domain, first-year college physics.  Forty-six students taking college physics solved problems in Andes in one of three conditions: with no reflection questions after problem solving (control group), with reflection questions discussed with human tutors, or with the same reflection questions followed by canned feedback (without a human tutor).  A comparison of pre-test and post-test scores was conducted to measure learning gains. The main result was that students learned more with reflection questions than without, but the two conditions with reflection questions (canned feedback and human tutoring) did not differ significantly.&lt;br /&gt;
&lt;br /&gt;
The current study is significant because it validated the results of these laboratory studies in an actual classroom setting.  Specifically, it showed that post-practice reflection supports learning, as measured by pre-test to post-test learning gain scores.  Due to low participation, however, this experiment did not shed light on the question of which modality of reflection is most effective.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near transfer ([[normal post-test]]) and far [[transfer]] (robust learning) items.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Short-term retention&#039;&#039;. Performance on the course exam that covered the target topic (work and energy), administered 1-2 weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on the final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Students were block-randomly assigned to one of four dialogue conditions: &lt;br /&gt;
* control (Ctrl), with no reflection questions or KCDs;&lt;br /&gt;
* canned text response (CTR), with free-response KCDs followed by pre-scripted expert responses; students&#039; responses were not given any positive/negative feedback.&lt;br /&gt;
* standard KCDs (KCD), with shorter-answer questions and tutor responses that indicate what parts of the student&#039;s response were incomplete or incorrect and try to elicit the correct aspects; and &lt;br /&gt;
* limited mixed-initiative (MIX), same as the standard KCDs, but with on-demand hypertext question-and-answer links.&lt;br /&gt;
&lt;br /&gt;
However, there were no significant differences between categorical condition assignments, even after reclassifying as Ctrl subjects those in the three treatment conditions who saw no KCDs and after collapsing the KCD and MIX conditions (due to minimal usage of mixed-initiative links, for the most part the two conditions were equivalent).  We therefore treated KCD and problem completion as continuous variables.&lt;br /&gt;
&lt;br /&gt;
The following variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
*Number of problems completed before the post-test was administered&lt;br /&gt;
*Number of reflection questions that the student completed&lt;br /&gt;
*CQPR—grade point average&lt;br /&gt;
*College major grouping&lt;br /&gt;
*Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypotheses ===&lt;br /&gt;
&lt;br /&gt;
*Because responding to reflection questions is a form of “active learning,” students in any of the three experimental conditions (self-explanation of canned text, tutor-led standard KCDs, mixed-initiative KCDs) should outperform students in the control (no-reflection) condition.  This hypothesis was supported.&lt;br /&gt;
*The more interactive the reflection modality the better, so mixed-initiative &amp;gt; standard KCD &amp;gt; self-explanation of canned text.  We were unable to test this hypothesis, due to low participation.&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;Gain scores summary&#039;&#039;:&lt;br /&gt;
**There were no significant differences in mean gain score (or in pre-test and post-test scores) by condition (4 conditions; 3 treatment, 1 control) or by collapsed effective condition (Ctrl, CTR, KCD).&lt;br /&gt;
**Yoked-pairs analysis, comparing students from the same major groups with identical pre-test scores (and minimal CQPR disparity) who completed no reflection questions with students who completed five or more questions (&#039;&#039;n&#039;&#039; = 19 pairs), showed that “treated” subjects tended to out-gain “untreated” subjects.&lt;br /&gt;
*&#039;&#039;Regression analysis summary&#039;&#039;:&lt;br /&gt;
**The number of reflection questions completed had a significant positive effect on post-test scores.&lt;br /&gt;
*&#039;&#039;Exam score summary&#039;&#039;:&lt;br /&gt;
**For the final exam, there was no significant impact of the number of reflection questions completed.&lt;br /&gt;
**For one course section’s hourly exam on work and energy, there was a significant, positive impact of the number of reflection questions completed, but only when CQPR was dropped from the model.  For the other section’s hourly exam, “treated” subjects significantly outperformed “untreated” subjects, despite being outnumbered 3 to 1.&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
This study is part of the Interactive Communication cluster, and its hypothesis is a specialization of the IC cluster’s central hypothesis.  The IC cluster’s central hypothesis is that robust learning occurs when two conditions are met:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;The learning event space should have paths that are mostly learning-by-doing along with alternative paths where a second agent does most of the work&#039;&#039;.  Since this experiment did not deal with collaboration between agents, it did not test this condition.  It did, however, show that the more students engage in one form of learning-by-doing—namely, post-practice reflection—the more they learn.&lt;br /&gt;
* &#039;&#039;The student takes the learning-by-doing path unless it becomes too difficult&#039;&#039;.  We are unable to determine why students chose not to take the learning-by-doing path.  Perhaps doing the reflection questions was too difficult for some students, but we suspect that this was not the case.  A more likely explanation is that students did not have the time to complete these questions, and there were no negative consequences for avoiding them.  In the follow-up study that we are currently running, students are required to complete the reflection questions in order to get credit for completing a problem.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to site visitors, 2005&lt;br /&gt;
* Poster presentation at the 2006 American Association for Physics Teachers (AAPT) Conference&lt;br /&gt;
* Full-paper accepted at AIED 2007&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Collins, A., Brown, J. S., &amp;amp; Newman, S. E. (1989). Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. In L. B. Resnick (Ed.), &#039;&#039;Knowing, learning, and instruction: Essays in honor of Robert Glaser&#039;&#039; (pp. 453-494). Hillsdale, NJ: Erlbaum.&lt;br /&gt;
*Lee, A. Y., &amp;amp; Hutchison, L. (1998). Improving learning from examples through reflection. &#039;&#039;Journal of Experimental Psychology: Applied, 4&#039;&#039; (3), 187-210.&lt;br /&gt;
*Katz, S., Allbritton, D., &amp;amp; Connelly, J. (2003). Going beyond the problem given: How human tutors use post-solution discussions to support transfer. &#039;&#039;International Journal of Artificial Intelligence and Education, 13&#039;&#039; (1), 79-116.&lt;br /&gt;
*Katz, S., Lesgold, A., Hughes, E., Peters, D., Eggan, G., Gordin, M., &amp;amp; Greenberg, L. (1998). Sherlock II: An intelligent tutoring system built upon the LRDC tutor framework. In C. P. Bloom &amp;amp; R. B. Loftin (Eds.), &#039;&#039;Facilitating the development and use of interactive learning environments&#039;&#039; (pp. 227-258). Mahwah, NJ: Erlbaum.&lt;br /&gt;
*Katz, S., O’Donnell, G., &amp;amp; Kay, H. (2000). An approach to analyzing the role and structure of reflective dialogue. &#039;&#039;International Journal of Artificial Intelligence and Education, 11&#039;&#039;, 320-343.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Reflective Dialogues (Katz)|Reflective Dialogues (Katz, Connelly &amp;amp; Treacy, 2006)]]&lt;br /&gt;
* [[Extending Reflective Dialogue Support (Katz &amp;amp; Connelly)|Extending Reflective Dialogue Support (Katz &amp;amp; Connelly, 2007)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; Vanlehn)]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=7848</id>
		<title>Extending Reflective Dialogue Support (Katz &amp; Connelly)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=7848"/>
		<updated>2008-04-15T21:33:54Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Future plans */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Extending Automated Dialogue Support for Robust Learning of Physics ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz &amp;amp; John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 10/1/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 9/30/08&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 75&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 125 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability, while other students show at least a glimmer of understanding of basic physics concepts and principles but are unable to use this knowledge to solve quantitative problems.  The present research seeks to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge via [[post-practice reflection]] dialogues that guide students in learning and practicing the concepts and principles associated with a just-solved physics problem.  It builds upon our 2006 LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]] by trying to better support [[robust learning]] via a third condition in which students work through longer dialogues designed to foster [[transfer]] by specifically tying together qualitative and quantitative knowledge in different contexts.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
* [[Transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge (i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them), combined with quantitative practice in applying these [[knowledge components]] in different contexts, enhance students’ problem-solving ability more than additional problem solving and better foster [[transfer]] and [[robust learning]]?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that instructors deal with a double-edged sword.  Some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability.  Other students display the reverse problem: they show at least a glimmer of understanding of basic physics concepts and principles, but are unable to use this knowledge to solve quantitative problems.   Still other students master neither qualitative nor quantitative understanding of physics; very few master both.  Thus, the instructional challenge motivating this project is to find effective pedagogical strategies to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge.   Our scientific goal is to determine whether explicit and implicit learning can be effectively combined via post-practice dialogues that guide students in reflecting on the concepts and principles associated with a just-solved physics problem.  The main hypothesis tested is that, in the context of tutored problem solving, &#039;&#039;integrative reflective dialogues&#039;&#039; that explicitly tie qualitative knowledge to quantitative knowledge can improve quantitative problem-solving ability and retention of qualitative knowledge better than problem-solving practice (implicit learning) alone.  &lt;br /&gt;
&lt;br /&gt;
To test this hypothesis, we conducted an experiment in the PSLC Physics LearnLab at the US Naval Academy in sections that use the [[Andes]] physics tutoring system (VanLehn et al., 2005a, 2005b).  We compared students who were randomly assigned to one of three conditions on measures of qualitative and quantitative problem-solving performance.  The two treatment conditions engaged in automated reflective dialogues after solving quantitative physics problems, while the control condition solved the same set of problems (plus a few additional problems to balance time on task) without any reflective dialogues, using the standard version of Andes.  In one treatment condition, the reflective dialogues individually targeted the three main types of knowledge that experts employ during problem solving, according to Leonard, Dufresne, &amp;amp; Mestre (1996): knowledge about &#039;&#039;what&#039;&#039; principle(s) to apply to a given problem, &#039;&#039;how&#039;&#039; to apply these principles (e.g., what equations to use), and &#039;&#039;why&#039;&#039; to apply them—that is, what the applicability conditions are.   In a prior LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], this intervention significantly improved students’ qualitative understanding of basic mechanics, as measured by pre-test to post-test gain scores.  However, students did not outperform standard Andes users on more [[robust learning]] measures of [[transfer]] (e.g., performance on quantitative course exams) and on a measure of retention of qualitative problem-solving ability (Katz, Connelly, &amp;amp; Wilson, 2007).  The extended dialogue condition we implemented differs from the other dialogue condition in three main ways:  (1) reflective dialogues contained more problem variations (&#039;&#039;what if&#039;&#039; scenarios), designed to support both qualitative and quantitative knowledge (most of our previous &#039;&#039;what if&#039;&#039; scenarios were qualitative only); (2) these “what if” scenarios were tied to the corresponding Andes problem-solving context &#039;&#039;&#039;and&#039;&#039;&#039; to new contexts, to help support near and far [[transfer]]; and (3) students were prompted to state the rules ([[knowledge components]]) applied to solving the problem variations, in order to promote a principle-based approach to learning, and they were given feedback that made these rules explicit.  Our goal is to determine whether reflective dialogues that make the links between qualitative and quantitative physics knowledge explicit are more effective than both our previous dialogues and an implicit learning condition that is based on problem-solving practice alone.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;Short-term retention&#039;&#039;. Performance on course exams that cover target topics (e.g., work and energy, translational dynamics, rotation, momentum).&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
Students within each course section were block-randomly assigned to one of three dialogue conditions in which the usage of [[Knowledge Construction Dialogues]] (KCDs) differed.  The short-KCD condition approximated the treatment condition from last year&#039;s study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], in which mostly conceptual dialogues followed problem solving on selected [[Andes]] problems.  The new long-KCD condition appended more quantitative practice and [[transfer]] content (via additional &#039;&#039;what if&#039;&#039; scenarios) to the short KCDs; students in this condition were assigned five fewer Andes problems than those in the short-KCD condition to attempt to equate time on task.  The control condition was identical to last year&#039;s; students saw no KCDs and were assigned five more Andes problems than the short-KCD students.&lt;br /&gt;
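&lt;br /&gt;
For concreteness, the following is a minimal, hypothetical sketch (in Python; the function, data structure, and condition labels are illustrative and are not the study&#039;s actual procedure) of how students might be block-randomly assigned to the three conditions within each course section:&lt;br /&gt;
 # Illustrative only: shuffle students within each section (the block), then deal them&lt;br /&gt;
 # round-robin into the conditions so each section is balanced across conditions.&lt;br /&gt;
 import random&lt;br /&gt;
 &lt;br /&gt;
 def assign_conditions(students_by_section, conditions, seed=0):&lt;br /&gt;
     rng = random.Random(seed)          # fixed seed so the assignment is reproducible&lt;br /&gt;
     assignment = {}&lt;br /&gt;
     for section, students in sorted(students_by_section.items()):&lt;br /&gt;
         shuffled = list(students)&lt;br /&gt;
         rng.shuffle(shuffled)          # randomize order within the block&lt;br /&gt;
         for i, student in enumerate(shuffled):&lt;br /&gt;
             assignment[student] = conditions[i % len(conditions)]&lt;br /&gt;
     return assignment&lt;br /&gt;
 &lt;br /&gt;
 groups = assign_conditions({&#039;section_1&#039;: [&#039;s01&#039;, &#039;s02&#039;, &#039;s03&#039;, &#039;s04&#039;, &#039;s05&#039;, &#039;s06&#039;]},&lt;br /&gt;
                            conditions=[&#039;control&#039;, &#039;short-KCD&#039;, &#039;long-KCD&#039;])&lt;br /&gt;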
&lt;br /&gt;
The following variables were entered into a regression analysis, with post-test score as the dependent variable (an illustrative sketch of a model of this form appears after the list):&lt;br /&gt;
&lt;br /&gt;
* Number of problems completed before the post-test was administered&lt;br /&gt;
* Number of dialogues that the student completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* College major category&lt;br /&gt;
* Pre-test score&lt;br /&gt;
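&lt;br /&gt;
The sketch below is purely illustrative (assuming Python with the pandas and statsmodels packages; the file name and column names are hypothetical, not the study&#039;s actual analysis code) of a multiple regression with these predictors:&lt;br /&gt;
 # Illustrative only: ordinary least squares with post-test score as the dependent variable.&lt;br /&gt;
 import pandas as pd&lt;br /&gt;
 import statsmodels.formula.api as smf&lt;br /&gt;
 &lt;br /&gt;
 df = pd.read_csv(&#039;students.csv&#039;)       # one row per student&lt;br /&gt;
 model = smf.ols(&#039;post_test ~ problems_completed + dialogues_completed + cqpr + C(major_category) + pre_test&#039;,&lt;br /&gt;
                 data=df)               # C(...) treats college major category as a categorical predictor&lt;br /&gt;
 results = model.fit()&lt;br /&gt;
 print(results.summary())               # coefficients, R-squared, and p-values for each predictor&lt;br /&gt;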
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
The main hypotheses tested were (a) that explicit, part-task training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, would be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems (replicating last year&#039;s study); and (b) that extended dialogues with additional quantitative practice in applying these [[knowledge components]] in different problem-solving contexts would better foster [[transfer]] and [[robust learning]].&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
Preliminary analyses show at least marginal support for a replication of last year&#039;s finding that student performance on the post-test relative to the pre-test was significantly influenced by the number of dialogues they completed, as opposed to the number of target problems they completed.  However, due to software glitches during data collection, low student participation (completing assigned homework and/or dialogues) in some course sections, and evidence of general &amp;quot;gaming the system&amp;quot; by some students, more detailed analyses are pending the identification and omission of noisy data from our overall corpus.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to Advisory Board, January 2008&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Katz, S., Connelly, J., &amp;amp; Wilson, C. (2007).  Out of the lab and into the classroom: An evaluation of reflective dialogue in Andes.  In R. Luckin, K. R. Koedinger, &amp;amp; J. Greer (Eds.), &#039;&#039;Artificial Intelligence in Education: Building Technology Rich Learning Contexts that Work&#039;&#039; (pp. 425-432). Amsterdam: IOS Press.&lt;br /&gt;
*Leonard, W. J., Dufresne, R. J., &amp;amp; Mestre, J. P. (1996). Using qualitative problem-solving strategies to highlight the role of conceptual knowledge in solving problems. &#039;&#039;American Journal of Physics, 64&#039;&#039; (12), 1495-1503.&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005a).  The Andes physics tutoring system: Lessons learned.  &#039;&#039;International Journal of Artificial Intelligence in Education&#039;&#039;, &#039;&#039;15&#039;&#039;(3).&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005b).  The Andes physics tutoring system: Five years of evaluation.  In G. McCalla, C.K. Looi, B. Bredeweg, &amp;amp; J. Breuker (Eds.), &#039;&#039;Artificial Intelligence in Education&#039;&#039; (pp. 678-685).  Amsterdam: IOS Press.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Post-practice reflection (Katz)|Post-practice reflection (Katz &amp;amp; Connelly, 2005)]]&lt;br /&gt;
* [[Reflective Dialogues (Katz)|Reflective Dialogues (Katz, Connelly &amp;amp; Treacy, 2006)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; Vanlehn)]]&lt;br /&gt;
&lt;br /&gt;
====  Future plans ====&lt;br /&gt;
Our future plans through September 2008:&lt;br /&gt;
* continue to try extracting viable data&lt;br /&gt;
* write journal article reporting updated findings&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Interactive_Communication&amp;diff=7847</id>
		<title>Interactive Communication</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Interactive_Communication&amp;diff=7847"/>
		<updated>2008-04-15T21:29:24Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Questioning */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= The PSLC Interactive Communication cluster =&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
The studies in the Interactive Communication cluster deal primarily with learning environments where there are two interacting, communicating agents, one of which is the student.  The other [[agent]] is typically a second student, a human tutor or a tutoring system.  They communicate, either in a natural language or a formal language, such as mathematical expressions or menus.  We are trying to find out why such instructional, dyadic, interactive communication is sometimes highly effective and sometimes less effective.  Sometimes we study highly constrained forms of communication in order to vary isolated aspects, and sometimes we compare whole forms of communication.  Our hypothesis is simply that interactive communication is effective if it guides students to attend to the right [[knowledge components]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;center&amp;gt;[[Image:ic-theory.jpg]]&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Background and Significance ==&lt;br /&gt;
Although instructional dialogue has been studied in classrooms (e.g., Lave &amp;amp; Wenger, 1991; Leinhardt, 1990) and workplaces (e.g., Hutchins, 1995; Nunes, Schliemann &amp;amp; Carraher, 1993), we are focusing on more tractable albeit still complex situations: &#039;&#039;dyadic&#039;&#039; instructional dialogues, namely dialogues between: (a) a human tutor and a human student, (b) two human students, or (c) a computer tutor and a human student. Moreover, the dialogues are task-oriented (Grosz &amp;amp; Sidner, 1986) in that the participants are working together on a task rather than simply conversing with no shared goals or with opposing goals.&lt;br /&gt;
 &lt;br /&gt;
Early studies focused on the structure of dyadic instructional dialogue (e.g., Fox, 1993; Graesser, Person &amp;amp; Magliano, 1995; MacArthur, Stasz, &amp;amp; Zmuidzinas, 1990).  When later studies compared the learning that occurred during dialogue vs.  less interactive instruction (e.g., VanLehn, Graesser et al., 2007; Katz, Connelly &amp;amp; Allbritton, 2003; Evens &amp;amp; Michael, 2006; Cohen, Kulik &amp;amp; Kulik, 1982), they found surprisingly mixed results.  Only 60% of the studies showed that interactive communication caused larger learning gains than less interactive instruction. &lt;br /&gt;
&lt;br /&gt;
The interactive communication cluster is undertaking the next step in this important line of research by investigating when different types of interactive communication are effective and why.  Sometimes we compare highly constrained forms of communication in order to vary isolated aspects, and sometimes we compare constrained interactive communication to passive communication (e.g., reading).&lt;br /&gt;
&lt;br /&gt;
== Glossary ==&lt;br /&gt;
See [[:Category:Interactive Communication|Interactive Communication Glossary]]&lt;br /&gt;
&lt;br /&gt;
== Research question ==&lt;br /&gt;
What properties of interactive communication promote robust learning?&lt;br /&gt;
&lt;br /&gt;
== Independent variables ==&lt;br /&gt;
The independent variables (also called Treatment Variables) of the IC cluster appear as column headers in the matrix above.  They are listed here with links to their glossary entries.&lt;br /&gt;
&lt;br /&gt;
* [[Collaboration]]&lt;br /&gt;
&lt;br /&gt;
* [[Vicarious learning]]&lt;br /&gt;
&lt;br /&gt;
* [[Collaboration scripts]]&lt;br /&gt;
&lt;br /&gt;
* [[Deep/Reflection questions]] &lt;br /&gt;
&lt;br /&gt;
* [[Instructional explanation]]&lt;br /&gt;
&lt;br /&gt;
* [[Prompted Self-explanation]]&lt;br /&gt;
&lt;br /&gt;
* [[Tutoring feedback]]&lt;br /&gt;
&lt;br /&gt;
* [[Error correction support]]&lt;br /&gt;
&lt;br /&gt;
== Dependent variables ==&lt;br /&gt;
Measures of normal and robust learning.&lt;br /&gt;
&lt;br /&gt;
== Hypothesis ==&lt;br /&gt;
Our central hypothesis is just a special case of the [[Knowledge component hypothesis]]: interactive communication is effective if it guides students to attend to the right [[knowledge components]].   The key words here are “guide” and “attend” because they may oppose each other.   A dialogue that strongly guides the student may also cause the student to disengage and thus not attend to the knowledge components even if the student’s dialogue partner mentions them.  On the other hand, an unguided dialogue may increase the student’s engagement but may skirt around the right knowledge components.  That is, the [[assistance dilemma]] surfaces as the degree of &#039;&#039;learner control&#039;&#039; (a term from the older educational literature) or &#039;&#039;student initiative&#039;&#039; (a nearly synonymous term from the natural language dialogue literature).&lt;br /&gt;
&lt;br /&gt;
== Explanation ==&lt;br /&gt;
If we view a short episode of interactive communication as a [[learning event space]], there could be three reasons why one treatment might be more effective than another:  &lt;br /&gt;
&lt;br /&gt;
(1) The learning event spaces might have different paths with different content.  For instance, if one person contributes critical information that the other person lacks, then their joint learning event space has paths that are absent in the learning event space of the second person if that person were working solo.  That is, the &#039;&#039;topology&#039;&#039; of one space might be better than the topology of the other.&lt;br /&gt;
&lt;br /&gt;
(2) If the learning event spaces in the two conditions are the same, then the interactive communication treatment might cause the students to traverse different paths than the control students.  That is, the &#039;&#039;path choices&#039;&#039; of one treatment might be better than the path choices of the other.&lt;br /&gt;
&lt;br /&gt;
(3) If the learning event spaces are the same and the students take the same paths, they still might learn more in one condition than another because of the way that they traversed the path.  For instance, having a partner observe the student as the student traverses a path might cause the student to be more attentive to details and to remember more.  That is, the &#039;&#039;path effects&#039;&#039; might differ in the treatment vs. the control.&lt;br /&gt;
&lt;br /&gt;
== Descendents ==&lt;br /&gt;
&lt;br /&gt;
=== Collaboration ===&lt;br /&gt;
When and how can collaboration between peers increase robust learning? Problem solving, example studying and many other activities can be done alone, in pairs, or in pairs with various kinds of assistance, such as collaboration scripts. From the standpoint of an individual learner, having a partner offers more [[assistance]] than working alone, and having a partner plus other scaffolding offers even more assistance.   Thus, the [[Assistance Hypothesis]] predicts an interaction between various forms of peer collaboration and students&#039; prior competence.&lt;br /&gt;
&lt;br /&gt;
*[[Craig_observing|Learning from Problem Solving while Observing Worked Examples (Craig, Gadgil, &amp;amp; Chi)]]&lt;br /&gt;
&lt;br /&gt;
*[[Hausmann_Diss|The effects of elaborative dialog on problem solving and learning (Hausmann &amp;amp; Chi, 2005)]]&lt;br /&gt;
&lt;br /&gt;
*[[Hausmann_Study2|The effects of interaction on robust learning (Hausmann &amp;amp; VanLehn, 2007)]]&lt;br /&gt;
&lt;br /&gt;
*[[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
*[[Walker_A_Peer_Tutoring_Addition|Collaborative Extensions to the Cognitive Tutor Algebra: A Peer Tutoring Addition (Walker, McLaren, Koedinger, &amp;amp; Rummel)]]&lt;br /&gt;
&lt;br /&gt;
*[[McLaren_et_al_-_Conceptual_Learning_in_Chemistry|Supporting Conceptual Learning in Chemistry through Collaboration Scripts and Adaptive, Online Support (McLaren, Rummel, Harrer, Spada, &amp;amp; Pinkwart)]]&lt;br /&gt;
&lt;br /&gt;
=== Questioning ===&lt;br /&gt;
When and how can asking the student questions increase the student&#039;s robust learning?  What kinds of questions are best?  &lt;br /&gt;
&lt;br /&gt;
*[[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
&lt;br /&gt;
*[[Post-practice reflection (Katz)|Post-practice reflection (Katz &amp;amp; Connelly, 2005)]]&lt;br /&gt;
&lt;br /&gt;
*[[Reflective Dialogues (Katz)|Reflective Dialogues (Katz, Connelly, &amp;amp; Treacy, 2006)]]&lt;br /&gt;
&lt;br /&gt;
*[[Extending Reflective Dialogue Support (Katz &amp;amp; Connelly)|Extending Reflective Dialogue Support (Katz &amp;amp; Connelly, 2007)]]&lt;br /&gt;
&lt;br /&gt;
*[[Self-explanation: Meta-cognitive vs. justification prompts|Self-explanation: Meta-cognitive vs. justification prompts (Hausmann, van de Sande, Gershman, &amp;amp; VanLehn, 2008)]]&lt;br /&gt;
&lt;br /&gt;
*[[FrenchCulture|Understanding culture from film (Ogan, Aleven &amp;amp; Jones)]] [Also relevant to Refinement &amp;amp; Fluency, Explicit instruction and manipulations of attention &amp;amp; discrimination]&lt;br /&gt;
&lt;br /&gt;
=== Tell vs. elicit ===&lt;br /&gt;
When a tutor knows that something needs to be said, she or he must decide whether to &#039;&#039;tell&#039;&#039; it to the tutee, try to &#039;&#039;elicit&#039;&#039; it from the tutee via a question or prompt, or just &#039;&#039;wait&#039;&#039; and hope that the tutee says it.  Similarly, if a tutor knows that something needs to be done, the tutor can do it, elicit the action from the student or just wait.  An instructional designer faces the same choices.  For each thing that needs to be said or done in the instructional dialogue, should the tutor or the student be made responsible for it?  For instance, should the tutoring system point out errors to the students or should the students detect their errors?  In general, assistance is higher when the tutor does a portion of the instructional activity than when the student does it.&lt;br /&gt;
 &lt;br /&gt;
*[[Hausmann_Study|Does it matter who generates the explanations? (Hausmann &amp;amp; VanLehn, 2006)]]&lt;br /&gt;
&lt;br /&gt;
*[[Student_Uncertainty|Does Treating Student Uncertainty as a Learning Impasse Improve Learning in Spoken Dialogue Tutoring? (Forbes-Riley &amp;amp; Litman)]]&lt;br /&gt;
&lt;br /&gt;
*[[The_Help_Tutor__Roll_Aleven_McLaren|Tutoring a meta-cognitive skill: Help-seeking (Roll, Aleven &amp;amp; McLaren)]] [Also in the Refinement &amp;amp; Fluency cluster, and relevant to Knowledge Component analysis]&lt;br /&gt;
&lt;br /&gt;
*[[The self-correction of speech errors (McCormick, O’Neill &amp;amp; Siskin)]]&lt;br /&gt;
&lt;br /&gt;
*[[Using Elaborated Explanations to Support Geometry Learning (Aleven &amp;amp; Butcher)]]&lt;br /&gt;
&lt;br /&gt;
*[[Plateau_study|What is the optimal level of interaction during learning from problem solving? (Hausmann, van de Sande, &amp;amp; VanLehn, 2008)]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Cluster]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Reflective_Dialogues_(Katz)&amp;diff=7846</id>
		<title>Reflective Dialogues (Katz)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Reflective_Dialogues_(Katz)&amp;diff=7846"/>
		<updated>2008-04-15T21:28:37Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Connections */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Do Reflective Dialogues that Explicitly Target the “What? How? and Why (not)?” Knowledge of Physics Problem Solving Promote Expert-like Planning Ability? ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz, John Connelly, Donald Treacy&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 3/1/06&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 6/30/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 67&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 750 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
One of the main differences between experts and novices in physics is that experts are more adept at identifying relevant principles and generating solution plans before starting to solve a problem (Chi, Glaser &amp;amp; Rees, 1982; Dufresne, Gerace, Hardiman, &amp;amp; Mestre, 1992; Priest &amp;amp; Lindsay, 1992).   We propose that this difference may arise for two reasons: (1) in traditional physics courses, students are not explicitly asked to plan, nor are they given scaffolding to support planning during problem solving, and (2) many students lack the basic knowledge of physics concepts, principles, and procedures that is prerequisite for effective planning.&lt;br /&gt;
&lt;br /&gt;
In this project, we tested the effectiveness of engaging students in reflective dialogues after they solve problems in [[Andes]] that explicitly target the three main types of knowledge that experts employ during planning: knowledge about what principle(s) to apply to a given problem, how to apply these principles (e.g., what equations to use), and why (or why not) to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).  The main hypothesis tested is that explicit training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, will be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems.&lt;br /&gt;
   &lt;br /&gt;
To test this hypothesis, we conducted a Physics LearnLab study in which we compared the effectiveness of an experimental version of Andes that engaged students in reflective “what? how? and why?” dialogues after physics problem solving ([[explicit instruction]] condition) with a control version that gave students additional problem-solving practice after they solved an Andes problem—e.g., by having them identify bugs in a sample student solution to a problem ([[implicit instruction]] condition).  We predicted that students in the explicit condition would outperform students in the implicit condition with respect to gain score from pre-test to post-test and scores on course exams that targeted far transfer.  Consistent with prior research, the dialogues promoted conceptual understanding of physics. However, measures of retention and transfer to problem-solving ability showed only a marginal effect of completing the reflective dialogues.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge—i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them—enhance students’ problem-solving ability?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Because experts develop schemata largely through a good deal of practice with solving various types of problems, traditional introductory college physics courses take a similar approach.  They are structured so that students are introduced to concepts, principles, and procedures in their text and through lectures, which they are then asked to apply to many problem-solving exercises.  The hope is that, with extensive practice, students will integrate [[procedural]] and [[conceptual knowledge]] and develop expert-like schemata and planning skills.  Unfortunately, many students don’t.  In addition to exiting these courses with lingering naïve misconceptions (e.g., Halloun &amp;amp; Hestenes, 1985; McDermott, 1984), they continue to solve problems largely by manipulating equations until the desired quantity is isolated—that is, by means-end analysis—instead of by identifying relevant principles and generating solution plans before starting to solve a problem, as experts do (Dufresne et al., 1992; Larkin, McDermott, Simon, &amp;amp; Simon, 1980; Priest &amp;amp; Lindsay, 1992).   &lt;br /&gt;
&lt;br /&gt;
We refer to the traditional approach to physics instruction described in the preceding paragraph as [[implicit instruction]], because it encourages the inductive development of abstractions (concepts, principles, and schemata) through repeated exposure to instances, instead of by explicitly reifying these abstractions (O&#039;Malley &amp;amp; Chamot, 1994).  In response to the limitations of implicit approaches to physics instruction that neither ask students to plan nor scaffold them in doing it, several instructional scientists have proposed methods that explicitly engage students in planning exercises (Dufresne et al., 1992; Leonard, Dufresne, &amp;amp; Mestre, 1996; Mestre, Dufresne, Gerace, &amp;amp; Hardiman, 1993).  These methods have met with modest success when tested mainly using high-achieving students (B or above in an introductory college physics course).  We suggest that the main reason that many students, even high achievers, are unable to use explicit planning methods effectively is that they lack the basic concepts, principles, and procedures that are prerequisites for effective planning.  &lt;br /&gt;
&lt;br /&gt;
This project tested the effectiveness of engaging students in reflective dialogues after they solve problems in Andes (e.g., Gertner &amp;amp; VanLehn, 2000) which explicitly targeted the three main types of knowledge that experts employ during planning: knowledge about what principle(s) to apply to a given problem, how to apply these principles (e.g., what equations to use), and why to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).  Explicit instruction on these knowledge components proved to be effective and more efficient than giving students repeated practice in solving different types of problems.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;Short-term retention&#039;&#039;. Performance on course exams that cover target topics (e.g., work and energy, translational dynamics, rotation, momentum).&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Students within each course section were block-randomly assigned to one of two dialogue conditions: standard [[Knowledge Construction Dialogues]] (KCDs) or control with no KCDs.  Students in the control condition were assigned an additional five Andes problems to attempt to better equate time on task.&lt;br /&gt;
&lt;br /&gt;
The following variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
* Number of problems completed before the post-test was administered&lt;br /&gt;
* Number of dialogues that the student completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* College major&lt;br /&gt;
* Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
The main hypothesis tested was that explicit, part-task training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, would be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems.&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
Analyses of pre- and post-test scores were more encouraging than for last year&#039;s study ([[Post-practice reflection (Katz)]]).  After omitting scores from one student with a post-test duration of less than two minutes, we were left with treatment and control groups of equal size (&#039;&#039;n&#039;&#039; = 33). We also re-classified two treatment subjects who did no dialogues as control subjects (effective treatment and control &#039;&#039;n&#039;&#039;s = 31 and 35, respectively), although these subjects were not able to access the five extra control-group Andes problems. As we had hoped, ANOVA showed no significant differences between the effective treatment and control groups on pre-test score (respective &#039;&#039;M&#039;&#039;s = 12.10 and 11.77; &#039;&#039;F&#039;&#039; &amp;lt; 1), but treatment subjects had higher mean post-test scores (17.97 vs. 15.57; &#039;&#039;F&#039;&#039;(1, 64) = 4.89, &#039;&#039;p&#039;&#039; = .031), mean raw gain scores (5.87 vs. 3.80; &#039;&#039;F&#039;&#039; = 5.62, &#039;&#039;p&#039;&#039; = .021), and mean Estes gain scores (0.330 vs. 0.208; &#039;&#039;F&#039;&#039; = 6.74, &#039;&#039;p&#039;&#039; = .012) than did control subjects. In short, without regard to problem-solving practice, subjects who did KCDs did significantly better on the post-test than those who did not.&lt;br /&gt;
&lt;br /&gt;
Student participation in both homework problem solving and dialogues was much improved over last year, with 21 of 34 control subjects (62%) and 23 of 33 treatment subjects (70%) finishing at least 80% of the assigned target problems (of 31 for control, 26 for treatment) prior to the post-test and with 25 treatment subjects (76%) finishing at least 80% of the 26 associated KCDs. However, participation was&lt;br /&gt;
still far from perfect; 7 treatment subjects (21%) completed half or fewer of the assigned KCDs on time, and 2 of them completed no target problems or KCDs on time.  We again treated KCD completion and target problem completion as continuous independent variables in regression analyses. Regressing post-test score on pre-test score, QPA, number of KCDs completed, and number of target problems completed&lt;br /&gt;
(&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .52, &#039;&#039;F&#039;&#039;(4, 61) = 16.70, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors, but only pre-test score, QPA, and KCD completion were statistically significant (&#039;&#039;p&#039;&#039;s &amp;lt; .001, .05, &amp;amp; .05, respectively); problem completion was &#039;&#039;ns&#039;&#039; (&#039;&#039;p&#039;&#039; = .54). Therefore, across all subjects it was KCD completion, as opposed to target homework problem completion, that significantly predicted post-test performance.&lt;br /&gt;
&lt;br /&gt;
To measure retention and transfer, we also analyzed scores from an hourly exam administered by one instructor to his two sections (&#039;&#039;n&#039;&#039;=47). All problems on this exam were quantitative and covered a subset of the units targeted during the intervention.  Scores on this exam ranged from 176 to 382 (of 400), and were highly correlated with&lt;br /&gt;
pre- and post-test scores; &#039;&#039;r&#039;&#039;s(45) = .54 and .71, &#039;&#039;p&#039;&#039;s &amp;lt; .0001 and .00001. However,&lt;br /&gt;
ANOVAs showed no differences between groups (Fs &amp;lt; 1), and regression of subscores on QPA, KCDs completed, and target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .54, &#039;&#039;F&#039;&#039;(3, 43) = 16.58,&lt;br /&gt;
&#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors but a significant effect of only QPA (&#039;&#039;p&#039;&#039; &amp;lt; .00001). Therefore, student performance on the hourly exam was not significantly affected by either KCD completion or target problem completion.&lt;br /&gt;
&lt;br /&gt;
In summary, student performance on the post-test relative to the pre-test was significantly influenced by the number of dialogues they completed, as opposed to the number of target problems they completed prior to the post-test. However, neither measure had a significant effect on student performance on an exam that focused on&lt;br /&gt;
problem-solving ability.&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
This study is part of the Interactive Communication cluster, and its hypothesis is a specialization of the IC cluster’s central hypothesis.  The IC cluster’s central hypothesis is that robust learning occurs when two conditions are met:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;The learning event space should have paths that are mostly learning-by-doing along with alternative paths where a second agent does most of the work&#039;&#039;.  Since this experiment did not deal with collaboration between agents, it did not test this condition.  It did, however, show that the more that students engage in one form of learning-by-doing—mainly, post-practice reflection—the more they learn.&lt;br /&gt;
* &#039;&#039;The student takes the learning-by-doing path unless it becomes too difficult&#039;&#039;.  More students took the learning-by-doing path than in our prior study, perhaps because this year instructors &#039;&#039;required&#039;&#039; them to do so (i.e., this time there were negative consequences for avoiding the dialogues).&lt;br /&gt;
&lt;br /&gt;
Analyses of retention and transfer to problem-solving ability did not reveal a significant effect of the reflective dialogues. We had hoped that the capstone dialogues that treatment students completed before selected problems, at the end of each unit, would support problem-solving ability. It is possible that the intervention consisted of too few capstone dialogues (only five) to observe any effect.&lt;br /&gt;
&lt;br /&gt;
What seems to be the critical feature of post-practice reflection (PPR) in supporting conceptual understanding is the explicit instruction that it provides in domain knowledge. PPR may help students to fill in knowledge gaps, resolve misconceptions,&lt;br /&gt;
and abstract from the case at hand so that they are better prepared to engage in constructive activity (e.g., self-explanation) in future problems. Preliminary research comparing Andes (which encourages implicit learning of problem-solving strategies) with Pyrenees, a system that teaches problem-solving strategies explicitly, also&lt;br /&gt;
suggests an advantage for explicit instruction (VanLehn et al., 2004). Further research is needed to identify the mechanisms that drive learning from reflective dialogue, and to increase its potential to enhance problem-solving ability in addition to conceptual knowledge.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to site visitors, 2005&lt;br /&gt;
* Full paper accepted at AIED 2007:&lt;br /&gt;
**Katz, S., Connelly, J., &amp;amp; Wilson, C. (2007). Out of the lab and into the classroom: An evaluation of reflective dialogue in Andes. In R. Luckin, K. R. Koedinger, &amp;amp; J. Greer (Eds.), &#039;&#039;Artificial Intelligence in Education: Building Technology Rich Learning Contexts that Work&#039;&#039; (pp. 425-432). Amsterdam: IOS Press.&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Chi, M. T. H., Glaser, R., &amp;amp; Rees, E. (1982). Expertise in problem solving. In R. J. Sternberg (Ed.), &#039;&#039;Advances in the Psychology of Human Intelligence, Vol. 1&#039;&#039; (pp. 7-75). Hillsdale, NJ: Erlbaum.&lt;br /&gt;
*Dufresne, R. J., Gerace, W. J., Hardiman, P. T., &amp;amp; Mestre, J. P. (1992). Constraining novices to perform expertlike analyses: Effects on schema acquisition. &#039;&#039;The Journal of the Learning Sciences, 2&#039;&#039; (3), 307-331.&lt;br /&gt;
*Gertner, A. S., &amp;amp; VanLehn, K. (2000). Andes: A coached problem solving environment for physics. In G. Gauthier, C. Frasson, &amp;amp; K. VanLehn (Eds.), &#039;&#039;ITS 2000: Proceedings of the 5th International Conference on Intelligent Tutoring Systems&#039;&#039; (pp. 133-142). Berlin: Springer-Verlag.&lt;br /&gt;
*Larkin, J., McDermott, J., Simon, D. P., &amp;amp; Simon, H. A. (1980). Expert and novice performance in solving physics problems. &#039;&#039;Science, 208&#039;&#039;, 1335-1342.&lt;br /&gt;
*Leonard, W. J., Dufresne, R. J., &amp;amp; Mestre, J. P. (1996). Using qualitative problem-solving strategies to highlight the role of conceptual knowledge in solving problems. &#039;&#039;American Journal of Physics, 64&#039;&#039; (12), 1495-1503.&lt;br /&gt;
*Mestre, J. P., Dufresne, R. J., Gerace, W. J., &amp;amp; Hardiman, P. T. (1993). Promoting skilled problem-solving behavior among beginning physics students. &#039;&#039;Journal of Research in Science Teaching, 30&#039;&#039;, 303-317.&lt;br /&gt;
*O’Malley, M., &amp;amp; Chamot, A. (1994). &#039;&#039;The CALLA Handbook&#039;&#039;. Reading, MA: Addison-Wesley.&lt;br /&gt;
*Priest, A. G., &amp;amp; Lindsay, R. O. (1992). New light on novice-expert differences in physics problem solving. &#039;&#039;British Journal of Psychology, 83&#039;&#039;, 389-405.&lt;br /&gt;
*VanLehn, K., Bhembe, D., Chi, M., Lynch, C., Schulze, K., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2004). Implicit versus explicit learning of strategies in a nonprocedural cognitive skill. In J. C. Lester, R. M. Vicari, &amp;amp; F. Paraguacu, (Eds.), &#039;&#039;Intelligent Tutoring Systems: 7th International Conference&#039;&#039; (pp. 521-530). Berlin: Springer-Verlag.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Post-practice reflection (Katz)|Post-practice reflection (Katz &amp;amp; Connelly, 2005)]]&lt;br /&gt;
* [[Extending Reflective Dialogue Support (Katz &amp;amp; Connelly)|Extending Reflective Dialogue Support (Katz &amp;amp; Connelly, 2007)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; Vanlehn)]]&lt;br /&gt;
&lt;br /&gt;
====  Future plans ====&lt;br /&gt;
Our future plans for January 2008 - December 2008:&lt;br /&gt;
* write journal article expanding AIED conference paper&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Post-practice_reflection_(Katz)&amp;diff=7845</id>
		<title>Post-practice reflection (Katz)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Post-practice_reflection_(Katz)&amp;diff=7845"/>
		<updated>2008-04-15T21:28:20Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Connections */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Post-practice reflection in a first-year physics course: Does mixed-initiative interaction support robust learning better than tutor-led interaction or canned text? ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PI&#039;&#039;&#039; || Sandra Katz&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039;&amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Post-Docs:&#039;&#039;&#039; &lt;br /&gt;
| &amp;lt;br&amp;gt;John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 1/1/05&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 12/31/05&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 123&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 500 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || No; Andes data still incompatible&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
We conducted an [[in vivo experiment]] within the PSLC Physics LearnLab to address two questions about reflection on recent problem-solving activity within [[Andes]]: (1) Does [[post-practice reflection]] support robust learning of physics—that is, students’ ability to [[transfer]] what they learned during instruction to novel situations? and (2) What is the preferred delivery mode for post-practice reflection in intelligent tutoring systems?  We compared pre-test to post-test learning gains of students who were randomly assigned to one of four conditions that represent points on a scale of increasingly generative student activity: (1) a control group who solved a set of problems in Andes without any reflective activity, (2) a self-explanation group, who solved Andes problems, responded to several “[[reflection questions]]” (Lee &amp;amp; Hutchison, 1998) after each problem and received a canned, expert-generated explanation that they were prompted to compare with their response, (3) a “tutor-led interaction” group, who solved the same Andes problems and reflection questions (RQs) as the didactic group, but the questions were embedded in [[Knowledge Construction Dialogues]]  (KCD’s) that guided students in generating a correct response, and (4) a “scaffolded mixed initiative KCD group,” which solved the same problems and responded to the same reflection questions as the other reflection groups, embedded in similar KCD’s as those presented to the “tutor-led interaction” group, with the added support of follow-up questioning menus that allowed students to take initiative. &lt;br /&gt;
	&lt;br /&gt;
Unfortunately, participation in the three experimental conditions was too low to compare the effectiveness of these treatments.  However, a yoked pairs analysis revealed that the more reflection questions that students did, of any type, the better they did, with respect to both post-test scores and pre-test to post-test gain scores.  Similarly, regression analysis revealed that the number of dialogues a student completed had a significant positive effect on post-test score, independent of the number of problems he or she completed.  These findings provide empirical support for post-practice reflection in an ITS that is a central component of a course.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
&lt;br /&gt;
=== Research questions ===&lt;br /&gt;
&lt;br /&gt;
Do reflection questions after physics problem solving support robust learning?  Is more interactive reflection, between the automated tutor and student, more effective in supporting robust learning than less interactive reflection?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
Despite the well-established role that post-practice reflective dialogue plays in apprentice-style learning (Collins, Brown, &amp;amp; Newman, 1989), few studies have been conducted to test its effectiveness.  Hence, there is little empirical support for incorporating post-practice reflection in courses led by a human teacher or within intelligent tutoring systems (ITSs).  Nor is there much guidance on how to implement reflection effectively, especially within the constraints of natural-language understanding and generation capabilities.&lt;br /&gt;
&lt;br /&gt;
Research by Lee and Hutchison (1998) focused on the effectiveness of reflection questions posed after students studied worked examples on balancing chemistry equations in a computer-based learning environment.  This study found that reflection questions enhanced students’ problem-solving ability.  However, the role of reflection in promoting conceptual understanding was not addressed; nor did the study investigate whether reflection questions after problem-solving exercises (as opposed to example studying) support learning.  &lt;br /&gt;
&lt;br /&gt;
Two laboratory studies conducted by Katz and her colleagues addressed these questions.   In a longitudinal analysis of student performance on avionics tasks in Sherlock 2 (e.g., Katz et al., 1998), Katz, O’Donnell and Kay (2000) found that discussions that took place between an avionics expert and two students were more effective in resolving misconceptions when they were distributed across problem solving and debrief than were discussions that took place during problem solving alone.  However, due to constraints inherent in the research setting, there was no control group in this study—that is, avionics trainees who did not experience debrief led by a domain expert—and no instrument to measure performance gains.   A follow-up study by Katz, Allbritton, &amp;amp; Connelly (2003) addressed these limitations in a different domain, first-year college physics.  Forty-six students taking college physics solved problems in Andes in one of three conditions: with no reflection questions after problem solving (control group), with reflection questions discussed with human tutors, or with the same reflection questions followed by canned feedback (without a human tutor).  A comparison of pre-test and post-test scores was conducted to measure learning gains. The main result was that students learned more with reflection questions than without, but the two conditions with reflection questions (canned feedback and human tutoring) did not differ significantly.&lt;br /&gt;
&lt;br /&gt;
The current study is significant because it validated the results of these laboratory studies in an actual classroom setting.  Specifically, it showed that post-practice reflection supports learning, as measured by pre-test to post-test learning gain scores.  Due to low participation, however, this experiment did not shed light on the question of which modality of reflection is most effective.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near transfer ([[normal post-test]]) and far [[transfer]] (robust learning) items.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Short-term retention&#039;&#039;. Performance on course exam that covered target topic (work and energy), 1-2 weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Students were block-randomly assigned to one of four dialogue conditions: &lt;br /&gt;
* control (Ctrl), with no reflection questions or KCDs; &lt;br /&gt;
* canned text response (CTR), with free-response KCDs followed by pre-scripted expert responses; students&#039; responses were not given positive or negative feedback;&lt;br /&gt;
* standard KCDs (KCD), with shorter-answer questions and tutor responses that indicate which parts of the student&#039;s response were incomplete or incorrect and try to elicit the correct aspects; and &lt;br /&gt;
* limited mixed-initiative (MIX), the same as the standard KCDs, but with on-demand hypertext question-and-answer links.&lt;br /&gt;
&lt;br /&gt;
However, there were no significant differences between categorical condition assignments, even after reclassifying as Ctrl subjects those in the three treatment conditions who saw no KCDs and after collapsing the KCD and MIX conditions (due to minimal usage of mixed-initiative links, for the most part the two conditions were equivalent).  We therefore treated KCD and problem completion as continuous variables.&lt;br /&gt;
&lt;br /&gt;
The following variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
*Number of problems completed before the post-test was administered&lt;br /&gt;
*Number of reflection questions that the student completed&lt;br /&gt;
*CQPR—grade point average&lt;br /&gt;
*College major grouping&lt;br /&gt;
*Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypotheses ===&lt;br /&gt;
&lt;br /&gt;
*Because responding to reflection questions is a form of “active learning,” students in any of the three experimental conditions (self-explanation of canned text, standard tutor-led KCDs, mixed-initiative KCDs) should outperform students in the control (no-reflection) condition.  This hypothesis was supported.&lt;br /&gt;
*The more interactive the reflection modality the better, so mixed-initiative &amp;gt; standard KCD &amp;gt; self-explanation of canned text.  We were unable to test this hypothesis, due to low participation.&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
*&#039;&#039;Gain scores summary&#039;&#039;:&lt;br /&gt;
**There were no significant differences in mean gain score (or in pre-test and post-test scores) by condition (4 conditions; 3 treatment, 1 control) or by collapsed effective condition (Ctrl, CTR, KCD).&lt;br /&gt;
**Yoked pairs analysis, comparing students from the same major groups with identical pre-test scores (and minimal CQPR disparity) who completed no reflection questions with students who completed five or more questions (&#039;&#039;n&#039;&#039; = 19 pairs), showed that “treated” subjects tended to out-gain “untreated” subjects.&lt;br /&gt;
*&#039;&#039;Regression analysis summary&#039;&#039;:&lt;br /&gt;
**The number of reflection questions completed had a significant positive effect on post-test scores.&lt;br /&gt;
*&#039;&#039;Exam score summary&#039;&#039;:&lt;br /&gt;
**For the final exam, there was no significant impact of the number of reflection questions completed.&lt;br /&gt;
**For one course section’s hourly exam on work and energy, there was a significant, positive impact of the number of reflection questions completed, but only when CQPR was dropped from the model.  For the other section’s hourly exam, “treated” subjects significantly outperformed “untreated” subjects, despite being outnumbered 3 to 1.&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
This study is part of the Interactive Communication cluster, and its hypothesis is a specialization of the IC cluster’s central hypothesis.  The IC cluster’s central hypothesis is that robust learning occurs when two conditions are met:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;The learning event space should have paths that are mostly learning-by-doing along with alternative paths where a second agent does most of the work&#039;&#039;.  Since this experiment did not deal with collaboration between agents, it did not test this condition.  It did, however, show that the more that students engage in one form of learning-by-doing—mainly, post-practice reflection—the more they learn.&lt;br /&gt;
* &#039;&#039;The student takes the learning-by-doing path unless it becomes too difficult&#039;&#039;.  We are unable to determine why students chose not to take the learning-by-doing path.  Perhaps doing the reflection questions was too difficult for some students, but we suspect that this was not the case.  A more likely explanation is that students did not have the time to complete these questions, and there were no negative consequences for avoiding them.  In the follow-up study that we are currently running, students are required to complete the reflection questions in order to get credit for completing a problem.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to site visitors, 2005&lt;br /&gt;
* Poster presentation at the 2006 American Association for Physics Teachers (AAPT) Conference&lt;br /&gt;
* Full-paper accepted at AIED 2007&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Collins, A., Brown, J. S., &amp;amp; Newman, S. E. (1989). Cognitive apprenticeship: Teaching the craft of reading, writing and mathematics. In L. B. Resnick (Ed.), &#039;&#039;Knowing, learning and instruction: Essays in honor of Robert Glaser&#039;&#039; (pp. 453-494). Hillsdale, NJ: Erlbaum.&lt;br /&gt;
*Lee, A. Y., &amp;amp; Hutchison, L. (1998). Improving learning from examples through reflection. &#039;&#039;Journal of Experimental Psychology: Applied, 4&#039;&#039; (3), 187-210.&lt;br /&gt;
*Katz, S., Allbritton, D., &amp;amp; Connelly, J. (2003). Going beyond the problem given: How human tutors use post-solution discussions to support transfer. &#039;&#039;International Journal of Artificial Intelligence in Education, 13&#039;&#039; (1), 79-116.&lt;br /&gt;
*Katz, S., Lesgold, A., Hughes, E., Peters, D., Eggan, G., Gordin, M., &amp;amp; Greenberg, L. (1998). Sherlock II: An intelligent tutoring system built upon the LRDC tutor framework. In C. P. Bloom &amp;amp; R. B. Loftin (Eds.), &#039;&#039;Facilitating the development and use of interactive learning environments&#039;&#039; (pp. 227-258). Mahwah, NJ: Erlbaum.&lt;br /&gt;
*Katz, S., O’Donnell, G., &amp;amp; Kay, H. (2000). An approach to analyzing the role and structure of reflective dialogue. &#039;&#039;International Journal of Artificial Intelligence in Education, 11&#039;&#039;, 320-343.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Reflective Dialogues (Katz)|Reflective Dialogues (Katz, Connelly &amp;amp; Treacy, 2006)]]&lt;br /&gt;
* [[Extending Reflective Dialogue Support (Katz &amp;amp; Connelly)|Extending Reflective Dialogue Support (Katz &amp;amp; Connelly, 2007)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; VanLehn)]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=7844</id>
		<title>Extending Reflective Dialogue Support (Katz &amp; Connelly)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=7844"/>
		<updated>2008-04-15T21:27:44Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Connections */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Extending Automated Dialogue Support for Robust Learning of Physics ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz &amp;amp; John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 10/1/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 9/30/08&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 75&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 125 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability, while other students show at least a glimmer of understanding of basic physics concepts and principles but are unable to use this knowledge to solve quantitative problems.  The present research seeks to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge via [[post-practice reflection]] dialogues that guide students in learning and practicing the concepts and principles associated with a just-solved physics problem.  It builds upon our 2006 LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]] by trying to better support [[robust learning]] via a third condition in which students work through longer dialogues designed to foster [[transfer]] by specifically tying together qualitative and quantitative knowledge in different contexts.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
* [[Transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge (i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them), combined with quantitative practice in applying these [[knowledge components]] in different contexts, enhance students’ problem-solving ability more than additional problem solving and better foster [[transfer]] and [[robust learning]]?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that instructors deal with a double-edged sword.  Some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability.  Other students display the reverse problem: they show at least a glimmer of understanding of basic physics concepts and principles, but are unable to use this knowledge to solve quantitative problems.   Still other students master neither qualitative nor quantitative understanding of physics; very few master both.  Thus, the instructional challenge motivating this project is to find effective pedagogical strategies to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge.   Our scientific goal is to determine whether explicit and implicit learning can be effectively combined via post-practice dialogues that guide students in reflecting on the concepts and principles associated with a just-solved physics problem.  The main hypothesis tested is that, in the context of tutored problem solving, &#039;&#039;integrative reflective dialogues&#039;&#039; that explicitly tie qualitative knowledge to quantitative knowledge can improve quantitative problem-solving ability and retention of qualitative knowledge better than problem-solving practice (implicit learning) alone.  &lt;br /&gt;
&lt;br /&gt;
To test this hypothesis, we conducted an experiment in the PSLC Physics LearnLab at the US Naval Academy in sections that use the [[Andes]] physics tutoring system (VanLehn et al., 2005a, 2005b).  We compared students who were randomly assigned to one of three conditions on measures of qualitative and quantitative problem-solving performance.  The two treatment conditions engaged in automated reflective dialogues after solving quantitative physics problems, while the control condition solved the same set of problems (plus a few additional problems to balance time on task) without any reflective dialogues, using the standard version of Andes.  In one treatment condition, the reflective dialogues individually targeted the three main types of knowledge that experts employ during problem solving, according to Leonard, Dufresne, &amp;amp; Mestre (1996): knowledge about &#039;&#039;what&#039;&#039; principle(s) to apply to a given problem, &#039;&#039;how&#039;&#039; to apply these principles (e.g., what equations to use), and &#039;&#039;why&#039;&#039; to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).   In a prior LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], this intervention significantly improved students’ qualitative understanding of basic mechanics, as measured by pre-test to post-test gain scores.  However, students did not outperform standard Andes users on more [[robust learning]] measures of [[transfer]] (e.g., performance on quantitative course exams) and on a measure of retention of qualitative problem-solving ability (Katz, Connelly, &amp;amp; Wilson, 2007).  The extended dialogue condition we implemented differs from the other dialogue condition in three main ways:  (1) reflective dialogues contained more problem variations (&#039;&#039;what if&#039;&#039; scenarios), designed to support both qualitative and quantitative knowledge (most of our previous &#039;&#039;what if&#039;&#039; scenarios were qualitative only); (2) these “what if” scenarios were tied to the corresponding Andes problem-solving context &#039;&#039;&#039;and&#039;&#039;&#039; to new contexts, to help support near and far [[transfer]]; and (3) students were prompted to state the rules ([[knowledge components]]) applied to solving the problem variations, in order to promote a principle-based approach to learning, and they were given feedback that makes these rules explicit.  Our goal is to determine whether reflective dialogues that make the links between qualitative and quantitative physics knowledge explicit are more effective than both our previous dialogues and an implicit learning condition that is based on problem-solving practice alone.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;Short-term retention&#039;&#039;. Performance on course exams that cover target topics (e.g., work and energy, translational dynamics, rotation, momentum).&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
Students within each course section were block-randomly assigned to one of three dialogue conditions in which the usage of [[Knowledge Construction Dialogues]] (KCDs) differed.  The short-KCD condition approximated the treatment condition from last year&#039;s study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], in which mostly conceptual dialogues followed problem solving on selected [[Andes]] problems.  The new long-KCD condition appended more quantitative practice and [[transfer]] content (via additional &#039;&#039;what if&#039;&#039; scenarios) to the short KCDs; students in this condition were assigned five fewer Andes problems than those in the short-KCD condition to attempt to equate time on task.  The control condition was identical to last year&#039;s; students saw no KCDs and were assigned five more Andes problems than the short-KCD students.&lt;br /&gt;
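&lt;br /&gt;
The sketch below illustrates one way block-random assignment within course sections can be carried out.  It is only an illustration: the function, its inputs, and the condition labels are hypothetical stand-ins, not the scripts actually used in this study.&lt;br /&gt;
&lt;br /&gt;
 import random&lt;br /&gt;
 &lt;br /&gt;
 def block_randomize(students_by_section, conditions=(&#039;control&#039;, &#039;short KCD&#039;, &#039;long KCD&#039;)):&lt;br /&gt;
     # Shuffle the roster of each course section, then deal students out&lt;br /&gt;
     # across the conditions in rotation, so every section contributes&lt;br /&gt;
     # a roughly equal number of students to each condition.&lt;br /&gt;
     assignment = {}&lt;br /&gt;
     for section, students in students_by_section.items():&lt;br /&gt;
         pool = list(students)&lt;br /&gt;
         random.shuffle(pool)&lt;br /&gt;
         for i, student in enumerate(pool):&lt;br /&gt;
             assignment[student] = conditions[i % len(conditions)]&lt;br /&gt;
     return assignment&lt;br /&gt;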
&lt;br /&gt;
The following variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
* Number of problems completed before the post-test was administered&lt;br /&gt;
* Number of dialogues that the student completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* College major category&lt;br /&gt;
* Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
The main hypotheses tested were (a) that explicit, part-task training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, would be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems (replicating last year&#039;s study); and (b) that extended dialogues with additional quantitative practice in applying these [[knowledge components]] in different problem-solving contexts would better foster [[transfer]] and [[robust learning]].&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
Preliminary analyses show at least marginal support for a replication of last year&#039;s finding that student performance on the post-test relative to the pre-test was significantly influenced by the number of dialogues they completed, as opposed to the number of target problems they completed.  However, due to software glitches during data collection, low student participation (completing assigned homework and/or dialogues) in some course sections, and evidence of general &amp;quot;gaming the system&amp;quot; by some students, more detailed analyses are pending the identification and omission of noisy data from our overall corpus.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to Advisory Board, January 2008&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Katz, S., Connelly, J., &amp;amp; Wilson, C. (2007).  Out of the lab and into the classroom: An evaluation of reflective dialogue in Andes.  In R. Luckin, K. R. Koedinger, &amp;amp; J. Greer (Eds.), &#039;&#039;Artificial Intelligence in Education: Building Technology Rich Learning Contexts that Work&#039;&#039; (pp. 425-432). Amsterdam: IOS Press.&lt;br /&gt;
*Leonard, W. J., Dufresne, R. J., &amp;amp; Mestre, J. P. (1996). Using qualitative problem-solving strategies to highlight the role of conceptual knowledge in solving problems. &#039;&#039;American Journal of Physics, 64&#039;&#039; (12), 1495-1503.&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Lessons learned.  &#039;&#039;International Journal of Artificial Intelligence in Education&#039;&#039;, &#039;&#039;15&#039;&#039;(3).&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Five years of evaluation.  In G. McCalla, C.K. Looi, B. Bredeweg, &amp;amp; J. Breuker (Eds.), &#039;&#039;Artificial Intelligence in Education&#039;&#039; (pp. 678-685).  Amsterdam, Netherlands: IOS Press.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Post-practice reflection (Katz)|Post-practice reflection (Katz &amp;amp; Connelly, 2005)]]&lt;br /&gt;
* [[Reflective Dialogues (Katz)|Reflective Dialogues (Katz, Connelly &amp;amp; Treacy, 2006)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; VanLehn)]]&lt;br /&gt;
&lt;br /&gt;
====  Future plans ====&lt;br /&gt;
Our future plans through September 2008:&lt;br /&gt;
* continue to try extracting viable data&lt;br /&gt;
* write journal article reporting updated findings&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Protected]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Interactive_Communication&amp;diff=7843</id>
		<title>Interactive Communication</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Interactive_Communication&amp;diff=7843"/>
		<updated>2008-04-15T21:26:07Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Questioning */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= The PSLC Interactive Communication cluster =&lt;br /&gt;
&lt;br /&gt;
== Abstract ==&lt;br /&gt;
The studies in the Interactive Communication cluster deal primarily with learning environments where there are two interacting, communicating agents, one of which is the student.  The other [[agent]] is typically a second student, a human tutor, or a tutoring system.  They communicate either in a natural language or in a formal language, such as mathematical expressions or menus.  We are trying to find out why such instructional, dyadic, interactive communication is sometimes highly effective and sometimes less effective.  Sometimes we study highly constrained forms of communication in order to vary isolated aspects, and sometimes we compare whole forms of communication.  Our hypothesis is simply that interactive communication is effective if it guides students to attend to the right [[knowledge components]].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;center&amp;gt;[[Image:ic-theory.jpg]]&amp;lt;/center&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Background and Significance ==&lt;br /&gt;
Although instructional dialogue has been studied in classrooms (e.g., Lave &amp;amp; Wenger, 1991; Leinhardt, 1990) and workplaces (e.g., Hutchins, 1995; Nunes, Schliemann &amp;amp; Carraher, 1993), we are focusing on more tractable albeit still complex situations: &#039;&#039;dyadic&#039;&#039; instructional dialogues, namely dialogues between (a) a human tutor and a human student, (b) two human students, or (c) a computer tutor and a human student. Moreover, the dialogues are task-oriented (Grosz &amp;amp; Sidner, 1986) in that the participants are working together on a task rather than simply conversing with no shared goals or with opposing goals.&lt;br /&gt;
 &lt;br /&gt;
Early studies focused on the structure of dyadic instructional dialogue (e.g., Fox, 1993; Graesser, Person &amp;amp; Magliano, 1995; MacArthur, Stasz, &amp;amp; Zmuidzinas, 1990).  When later studies compared the learning that occurred during dialogue vs. less interactive instruction (e.g., VanLehn, Graesser et al., 2007; Katz, Allbritton &amp;amp; Connelly, 2003; Evens &amp;amp; Michael, 2006; Cohen, Kulik &amp;amp; Kulik, 1982), they found surprisingly mixed results.  Only 60% of the studies showed that interactive communication caused larger learning gains than less interactive instruction. &lt;br /&gt;
&lt;br /&gt;
The interactive communication cluster is undertaking the next step in this important line of research by investigating when different types of interactive communication are effective and why.  Sometimes we compare highly constrained forms of communication in order to vary isolated aspects, and sometimes we compare constrained interactive communication to passive communication (e.g., reading).&lt;br /&gt;
&lt;br /&gt;
== Glossary ==&lt;br /&gt;
See [[:Category:Interactive Communication|Interactive Communication Glossary]]&lt;br /&gt;
&lt;br /&gt;
== Research question ==&lt;br /&gt;
What properties of interactive communication promote robust learning?&lt;br /&gt;
&lt;br /&gt;
== Independent variables ==&lt;br /&gt;
The independent variables (also called Treatment Variables) of the IC cluster appear as column headers in the matrix above.  They are listed here with links to their glossary entries.&lt;br /&gt;
&lt;br /&gt;
* [[Collaboration]]&lt;br /&gt;
&lt;br /&gt;
* [[Vicarious learning]]&lt;br /&gt;
&lt;br /&gt;
* [[Collaboration scripts]]&lt;br /&gt;
&lt;br /&gt;
* [[Deep/Reflection questions]] &lt;br /&gt;
&lt;br /&gt;
* [[Instructional explanation]]&lt;br /&gt;
&lt;br /&gt;
* [[Prompted Self-explanation]]&lt;br /&gt;
&lt;br /&gt;
* [[Tutoring feedback]]&lt;br /&gt;
&lt;br /&gt;
* [[Error correction support]]&lt;br /&gt;
&lt;br /&gt;
== Dependent variables ==&lt;br /&gt;
Measures of normal and robust learning.&lt;br /&gt;
&lt;br /&gt;
== Hypothesis ==&lt;br /&gt;
Our central hypothesis is just a special case of the [[Knowledge component hypothesis]]: interactive communication is effective if it guides students to attend to the right [[knowledge components]].   The key words here are “guide” and “attend” because they may oppose each other.   A dialogue that strongly guides the student may also cause the student to disengage and thus not attend to the knowledge components even if the student’s dialogue partner mentions them.  On the other hand, an unguided dialogue may increase the student’s engagement but may skirt around the right knowledge components.  That is, the [[assistance dilemma]] surfaces as the degree of &#039;&#039;learner control&#039;&#039; (a term from the older educational literature) or &#039;&#039;student initiative&#039;&#039; (a nearly synonymous term from the natural language dialogue literature).&lt;br /&gt;
&lt;br /&gt;
== Explanation ==&lt;br /&gt;
If we view a short episode of interactive communication as a [[learning event space]], there could be three reasons why one treatment might be more effective than another:  &lt;br /&gt;
&lt;br /&gt;
(1) The learning event spaces might have different paths with different content.  For instance, if one person contributes critical information that the other person lacks, then their joint learning event space has paths that are absent in the learning event space of the second person if that person were working solo.  That is, the &#039;&#039;topology&#039;&#039; of one space might be better than the topology of the other.&lt;br /&gt;
&lt;br /&gt;
(2) If the learning event spaces in the two conditions are the same, then the interactive communication treatment might cause the students to traverse different paths than the control students.  That is, the &#039;&#039;path choices&#039;&#039; of one treatment might be better than the path choices of the other.&lt;br /&gt;
&lt;br /&gt;
(3) If the learning event spaces are the same and the students take the same paths, they still might learn more in one condition than another because of the way that they traversed the path.  For instance, having a partner observe the student as the student traverses a path might cause the student to be more attentive to details and to remember more.  That is, the &#039;&#039;path effects&#039;&#039; might differ in the treatment vs. the control.&lt;br /&gt;
&lt;br /&gt;
== Descendents ==&lt;br /&gt;
&lt;br /&gt;
=== Collaboration ===&lt;br /&gt;
When and how can collaboration between peers increase robust learning? Problem solving, example studying, and many other activities can be done alone, in pairs, or in pairs with various kinds of assistance, such as collaboration scripts. From the standpoint of an individual learner, having a partner offers more [[assistance]] than working alone, and having a partner plus other scaffolding offers even more assistance.   Thus, the [[Assistance Hypothesis]] predicts an interaction between various forms of peer collaboration and students&#039; prior competence.&lt;br /&gt;
&lt;br /&gt;
*[[Craig_observing|Learning from Problem Solving while Observing Worked Examples (Craig, Gadgil, &amp;amp; Chi)]]&lt;br /&gt;
&lt;br /&gt;
*[[Hausmann_Diss|The effects of elaborative dialog on problem solving and learning (Hausmann &amp;amp; Chi, 2005)]]&lt;br /&gt;
&lt;br /&gt;
*[[Hausmann_Study2|The effects of interaction on robust learning (Hausmann &amp;amp; VanLehn, 2007)]]&lt;br /&gt;
&lt;br /&gt;
*[[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
*[[Walker_A_Peer_Tutoring_Addition|Collaborative Extensions to the Cognitive Tutor Algebra: A Peer Tutoring Addition (Walker, McLaren, Koedinger, &amp;amp; Rummel)]]&lt;br /&gt;
&lt;br /&gt;
*[[McLaren_et_al_-_Conceptual_Learning_in_Chemistry|Supporting Conceptual Learning in Chemistry through Collaboration Scripts and Adaptive, Online Support (McLaren, Rummel, Harrer, Spada, &amp;amp; Pinkwart)]]&lt;br /&gt;
&lt;br /&gt;
=== Questioning ===&lt;br /&gt;
When and how can asking the student questions increase the student&#039;s robust learning?  What kinds of questions are best?  &lt;br /&gt;
&lt;br /&gt;
*[[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
&lt;br /&gt;
*[[Post-practice reflection (Katz)|Post-practice reflection (Katz &amp;amp; Connelly, 2005)]]&lt;br /&gt;
&lt;br /&gt;
*[[Reflective Dialogues (Katz)|Reflective Dialogues (Katz, Connelly &amp;amp; Treacy, 2006)]]&lt;br /&gt;
&lt;br /&gt;
*[[Extending Reflective Dialogue Support (Katz &amp;amp; Connelly)|Extending Reflective Dialogue Support (Katz &amp;amp; Connelly, 2007)]]&lt;br /&gt;
&lt;br /&gt;
*[[Self-explanation: Meta-cognitive vs. justification prompts|Self-explanation: Meta-cognitive vs. justification prompts (Hausmann, van de Sande, Gershman, &amp;amp; VanLehn, 2008)]]&lt;br /&gt;
&lt;br /&gt;
*[[FrenchCulture|Understanding culture from film (Ogan, Aleven &amp;amp; Jones)]] [Also relevant to Refinement &amp;amp; Fluency, Explicit instruction and manipulations of attention &amp;amp; discrimination]&lt;br /&gt;
&lt;br /&gt;
=== Tell vs. elicit ===&lt;br /&gt;
When a tutor knows that something needs to be said, she or he must decide whether to &#039;&#039;tell&#039;&#039; it to the tutee, try to &#039;&#039;elicit&#039;&#039; it from the tutee via a question or prompt, or just &#039;&#039;wait&#039;&#039; and hope that the tutee says it.  Similarly, if a tutor knows that something needs to be done, the tutor can do it, elicit the action from the student or just wait.  An instructional designer faces the same choices.  For each thing that needs to be said or done in the instructional dialogue, should the tutor or the student be made responsible for it?  For instance, should the tutoring system point out errors to the students or should the students detect their errors?  In general, assistance is higher when the tutor does a portion of the instructional activity than when the student does it.&lt;br /&gt;
 &lt;br /&gt;
*[[Hausmann_Study|Does it matter who generates the explanations? (Hausmann &amp;amp; VanLehn, 2006)]]&lt;br /&gt;
&lt;br /&gt;
*[[Student_Uncertainty|Does Treating Student Uncertainty as a Learning Impasse Improve Learning in Spoken Dialogue Tutoring? (Forbes-Riley &amp;amp; Litman)]]&lt;br /&gt;
&lt;br /&gt;
*[[The_Help_Tutor__Roll_Aleven_McLaren|Tutoring a meta-cognitive skill: Help-seeking (Roll, Aleven &amp;amp; McLaren)]] [Also in the Refinement &amp;amp; Fluency cluster, and relevant to Knowledge Component analysis]&lt;br /&gt;
&lt;br /&gt;
*[[The self-correction of speech errors (McCormick, O’Neill &amp;amp; Siskin)]]&lt;br /&gt;
&lt;br /&gt;
*[[Using Elaborated Explanations to Support Geometry Learning (Aleven &amp;amp; Butcher)]]&lt;br /&gt;
&lt;br /&gt;
*[[Plateau_study|What is the optimal level of interaction during learning from problem solving? (Hausmann, van de Sande, &amp;amp; VanLehn, 2008)]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Cluster]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Reflective_Dialogues_(Katz)&amp;diff=7842</id>
		<title>Reflective Dialogues (Katz)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Reflective_Dialogues_(Katz)&amp;diff=7842"/>
		<updated>2008-04-15T21:24:23Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Connections */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Do Reflective Dialogues that Explicitly Target the “What? How? and Why (not)?” Knowledge of Physics Problem Solving Promote Expert-like Planning Ability? ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz, John Connelly, Donald Treacy&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 3/1/06&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 6/30/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 67&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 750 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
One of the main differences between experts and novices in physics is that experts are more adept at identifying relevant principles and generating solution plans before starting to solve a problem (Chi, Glaser &amp;amp; Rees, 1982; Dufresne, Gerace, Hardiman, &amp;amp; Mestre, 1992; Priest &amp;amp; Lindsay, 1992).   We propose that this difference may be due to two reasons: (1) in traditional physics courses, students are not explicitly asked to plan, nor are they given scaffolding to support planning during problem solving, and (2) many students lack the basic knowledge of physics concepts, principles, and procedures that is prerequisite for effective planning.    &lt;br /&gt;
&lt;br /&gt;
In this project, we tested the effectiveness of engaging students in reflective dialogues after they solve problems in [[Andes]] that explicitly target the three main types of knowledge that experts employ during planning: knowledge about what principle(s) to apply to a given problem, how to apply these principles (e.g., what equations to use), and why (or why not) to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).  The main hypothesis tested is that explicit training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, will be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems.&lt;br /&gt;
   &lt;br /&gt;
To test this hypothesis, we conducted a Physics LearnLab study in which we compared the effectiveness of an experimental version of Andes that engaged students in reflective “what? how? and why?” dialogues after physics problem solving ([[explicit instruction]] condition) with a control version that gave students additional problem-solving practice after they solved an Andes problem—e.g., by having them identify bugs in a sample student solution to a problem ([[implicit instruction]] condition).  We predicted that students in the explicit condition would outperform students in the implicit condition with respect to gain score from pre-test to post-test and scores on course exams that targeted far transfer.  Consistent with prior research, the dialogues promoted conceptual understanding of physics. However, measures of retention and transfer to problem-solving ability showed only a marginal effect of completing the reflective dialogues.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge—i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them—enhance students’ problem-solving ability?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Because experts develop schemata largely through a good deal of practice with solving various types of problems, traditional introductory college physics courses take a similar approach.  They are structured so that students are introduced to concepts, principles, and procedures in their text and through lectures, which they are then asked to apply to many problem-solving exercises.  The hope is that, with extensive practice, students will integrate [[procedural]] and [[conceptual knowledge]] and develop expert-like schemata and planning skills.  Unfortunately, many students don’t.  In addition to exiting these courses with lingering naïve misconceptions (e.g., Halloun &amp;amp; Hestenes, 1985; McDermott, 1984), they continue to solve problems largely by manipulating equations until the desired quantity is isolated—that is, by means-end analysis—instead of by identifying relevant principles and generating solution plans before starting to solve a problem, as experts do (Dufresne et al., 1992; Larkin, McDermott, Simon, &amp;amp; Simon, 1980; Priest &amp;amp; Lindsay, 1992).   &lt;br /&gt;
&lt;br /&gt;
We refer to the traditional approach to physics instruction described in the preceding paragraph as [[implicit instruction]], because it encourages the inductive development of abstractions (concepts, principles, and schemata) through repeated exposure to instances, instead of by explicitly reifying these abstractions (O&#039;Malley &amp;amp; Chamot, 1994).  In response to the limitations of implicit approaches to physics instruction that neither ask students to plan nor scaffold them in doing it, several instructional scientists have proposed methods that explicitly engage students in planning exercises (Dufresne et al., 1992; Leonard, Dufresne, &amp;amp; Mestre, 1996; Mestre, Dufresne, Gerace, &amp;amp; Hardiman, 1993).  These methods have met with modest success when tested mainly using high-achieving students (B or above in an introductory college physics course).  We suggest that the main reason that many students, even high achievers, are unable to use explicit planning methods effectively is that they lack the basic concepts, principles, and procedures that are prerequisites for effective planning.  &lt;br /&gt;
&lt;br /&gt;
This project tested the effectiveness of engaging students in reflective dialogues after they solve problems in Andes (e.g., Gertner &amp;amp; VanLehn, 2000) which explicitly targeted the three main types of knowledge that experts employ during planning: knowledge about what principle(s) to apply to a given problem, how to apply these principles (e.g., what equations to use), and why to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).  Explicit instruction on these knowledge components proved to be effective and more efficient than giving students repeated practice in solving different types of problems.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;Short-term retention&#039;&#039;. Performance on course exams that cover target topics (e.g., work and energy, translational dynamics, rotation, momentum).&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Students within each course section were block-randomly assigned to one of two dialogue conditions: standard [[Knowledge Construction Dialogues]] (KCDs) or control with no KCDs.  Students in the control condition were assigned an additional five Andes problems to attempt to better equate time on task.&lt;br /&gt;
&lt;br /&gt;
The following variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
* Number of problems completed before the post-test was administered&lt;br /&gt;
* Number of dialogues that the student completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* College major&lt;br /&gt;
* Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
The main hypothesis tested was that explicit, part-task training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, would be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems.&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
Analyses of pre- and post-test scores were more encouraging than for last year&#039;s study ([[Post-practice reflection (Katz)]]).  After omitting scores from one student with a post-test duration of less than two minutes, we were left with treatment and control groups of equal size (&#039;&#039;n&#039;&#039; = 33). We also re-classified two treatment subjects who did no dialogues as control subjects (effective treatment and control &#039;&#039;n&#039;&#039;s = 31 and 35, respectively), although these subjects were not able to access the five extra control-group Andes problems. As we had hoped, ANOVA showed no significant differences between the effective treatment and control groups on pre-test score (respective &#039;&#039;M&#039;&#039;s = 12.10 and 11.77; &#039;&#039;F&#039;&#039; &amp;lt; 1), but treatment subjects had higher mean post-test scores (17.97 vs. 15.57; &#039;&#039;F&#039;&#039;(1, 64) = 4.89, &#039;&#039;p&#039;&#039; = .031), mean raw gain scores (5.87 vs. 3.80; &#039;&#039;F&#039;&#039; = 5.62, &#039;&#039;p&#039;&#039; = .021), and mean Estes gain scores (0.330 vs. 0.208; &#039;&#039;F&#039;&#039; = 6.74, &#039;&#039;p&#039;&#039; = .012) than did control subjects. In short, without regard to problem-solving practice, subjects who did KCDs did significantly better on the post-test than those who did not.&lt;br /&gt;
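&lt;br /&gt;
For illustration, a between-groups comparison like the ANOVA reported above could be computed along the following lines.  This is only a sketch; the score lists are placeholders, not the study&#039;s actual data.&lt;br /&gt;
&lt;br /&gt;
 from scipy import stats&lt;br /&gt;
 &lt;br /&gt;
 # Post-test scores for the effective treatment and control groups&lt;br /&gt;
 # (placeholder values only).&lt;br /&gt;
 post_treatment = [18, 21, 15, 19, 17]&lt;br /&gt;
 post_control = [14, 17, 16, 12, 15]&lt;br /&gt;
 &lt;br /&gt;
 # One-way ANOVA; with exactly two groups this is equivalent to a t-test.&lt;br /&gt;
 f_value, p_value = stats.f_oneway(post_treatment, post_control)&lt;br /&gt;
 print(f_value, p_value)&lt;br /&gt;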
&lt;br /&gt;
Student participation in both homework problem solving and dialogues was much improved over last year, with 21 of 34 control subjects (62%) and 23 of 33 treatment subjects (70%) finishing at least 80% of the assigned target problems (of 31 for control, 26 for treatment) prior to the post-test and with 25 treatment subjects (76%) finishing at least 80% of the 26 associated KCDs. However, participation was still far from perfect; 7 treatment subjects (21%) completed half or fewer of the assigned KCDs on time, and 2 of them completed no target problems or KCDs on time.  We again treated KCD completion and target problem completion as continuous independent variables in regression analyses. Regressing post-test score on pre-test score, QPA, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .52, &#039;&#039;F&#039;&#039;(4, 61) = 16.70, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors, but only pre-test score, QPA, and KCD completion were statistically significant (&#039;&#039;p&#039;&#039;s &amp;lt; .001, .05, &amp;amp; .05, respectively); problem completion was &#039;&#039;ns&#039;&#039; (&#039;&#039;p&#039;&#039; = .54). Therefore, across all subjects it was KCD completion, as opposed to target homework problem completion, that significantly predicted post-test performance.&lt;br /&gt;
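&lt;br /&gt;
A minimal sketch of such a regression is shown below, assuming the per-student measures have been gathered into a data frame; the file and column names are illustrative placeholders, not the study&#039;s actual variable names.&lt;br /&gt;
&lt;br /&gt;
 import pandas as pd&lt;br /&gt;
 import statsmodels.formula.api as smf&lt;br /&gt;
 &lt;br /&gt;
 # One row per student: post-test score, pre-test score, grade point&lt;br /&gt;
 # average (QPA), KCDs completed, and target problems completed.&lt;br /&gt;
 df = pd.read_csv(&#039;student_scores.csv&#039;)&lt;br /&gt;
 model = smf.ols(&#039;posttest ~ pretest + qpa + kcds_completed + problems_completed&#039;, data=df).fit()&lt;br /&gt;
 print(model.summary())&lt;br /&gt;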
&lt;br /&gt;
To measure retention and transfer, we also analyzed scores from an hourly exam administered by one instructor to his two sections (&#039;&#039;n&#039;&#039;=47). All problems on this exam were quantitative and covered a subset of the units targeted during the intervention.  Scores on this exam ranged from 176 to 382 (of 400), and were highly correlated with pre- and post-test scores; &#039;&#039;r&#039;&#039;s(45) = .54 and .71, &#039;&#039;p&#039;&#039;s &amp;lt; .0001 and .00001. However, ANOVAs showed no differences between groups (&#039;&#039;F&#039;&#039;s &amp;lt; 1), and regression of subscores on QPA, KCDs completed, and target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .54, &#039;&#039;F&#039;&#039;(3, 43) = 16.58, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors but a significant effect of only QPA (&#039;&#039;p&#039;&#039; &amp;lt; .00001). Therefore, student performance on the hourly exam was not significantly affected by either KCD completion or target problem completion.&lt;br /&gt;
&lt;br /&gt;
In summary, student performance on the post-test relative to the pre-test was significantly influenced by the number of dialogues they completed, as opposed to the number of target problems they completed prior to the post-test. However, neither measure had a significant effect on student performance on an exam that focused on problem-solving ability.&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
This study is part of the Interactive Communication cluster, and its hypothesis is a specialization of the IC cluster’s central hypothesis.  The IC cluster’s central hypothesis is that robust learning occurs when two conditions are met:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;The learning event space should have paths that are mostly learning-by-doing along with alternative paths where a second agent does most of the work&#039;&#039;.  Since this experiment did not deal with collaboration between agents, it did not test this condition.  It did, however, show that the more students engage in one form of learning-by-doing—namely, post-practice reflection—the more they learn.&lt;br /&gt;
* &#039;&#039;The student takes the learning-by-doing path unless it becomes too difficult&#039;&#039;.  More students took the learning-by-doing path than in our prior study, perhaps because this year instructors &#039;&#039;required&#039;&#039; them to do so (i.e., this time there were negative consequences for skipping the dialogues).&lt;br /&gt;
&lt;br /&gt;
Analyses of retention and transfer to problem-solving ability did not reveal a significant effect of the reflective dialogues. We had hoped that the capstone dialogues that treatment students completed before selected problems, at the end of each unit, would support problem-solving ability. It is possible that the intervention consisted of too few capstone dialogues (only five) to observe any effect.&lt;br /&gt;
&lt;br /&gt;
What seems to be the critical feature of post-practice reflection (PPR) in supporting conceptual understanding is the explicit instruction that it provides in domain knowledge. PPR may help students to fill in knowledge gaps, resolve misconceptions, and abstract from the case at hand so that they are better prepared to engage in constructive activity (e.g., self-explanation) in future problems. Preliminary research comparing Andes (which encourages implicit learning of problem-solving strategies) with Pyrenees, a system that teaches problem-solving strategies explicitly, also suggests an advantage for explicit instruction (VanLehn et al., 2004). Further research is needed to identify the mechanisms that drive learning from reflective dialogue, and to increase its potential to enhance problem-solving ability in addition to conceptual knowledge.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to site visitors, 2005&lt;br /&gt;
* Full paper accepted at AIED 2007:&lt;br /&gt;
**Katz, S., Connelly, J., &amp;amp; Wilson, C. (2007). Out of the lab and into the classroom: An evaluation of reflective dialogue in Andes. In R. Luckin, K. R. Koedinger, &amp;amp; J. Greer (Eds.), &#039;&#039;Artificial Intelligence in Education: Building Technology Rich Learning Contexts that Work&#039;&#039; (pp. 425-432). Amsterdam: IOS Press.&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Chi, M. T. H., Glaser, R., &amp;amp; Rees, E. (1982). Expertise in problem solving. In R. J. Sternberg (Ed.), &#039;&#039;Advances in the Psychology of Human Intelligence, Vol. 1&#039;&#039; (pp. 7-75). Hillsdale, NJ: Erlbaum.&lt;br /&gt;
*Dufresne, R. J., Gerace, W. J., Hardiman, P. T., &amp;amp; Mestre, J. P. (1992). Constraining novices to perform expertlike analyses: Effects on schema acquisition. &#039;&#039;The Journal of the Learning Sciences, 2&#039;&#039; (3), 307-331.&lt;br /&gt;
*Gertner, A. S., &amp;amp; VanLehn, K. (2000). Andes: A coached problem solving environment for physics. In G. Gauthier, C. Frasson, &amp;amp; K. VanLehn (Eds.), &#039;&#039;ITS 2000: Proceedings of the 5th International Conference on Intelligent Tutoring Systems&#039;&#039; (pp. 133-142). Berlin: Springer-Verlag.&lt;br /&gt;
*Larkin, J., McDermott, J., Simon, D. P., &amp;amp; Simon, H. A. (1980). Expert and novice performance in solving physics problems. &#039;&#039;Science, 208&#039;&#039;, 1335-1342.&lt;br /&gt;
*Leonard, W. J., Dufresne, R. J., &amp;amp; Mestre, J. P. (1996). Using qualitative problem-solving strategies to highlight the role of conceptual knowledge in solving problems. &#039;&#039;American Journal of Physics, 64&#039;&#039; (12), 1495-1503.&lt;br /&gt;
*Mestre, J. P., Dufresne, R. J., Gerace, W. J., &amp;amp; Hardiman, P. T. (1993). Promoting skilled problem-solving behavior among beginning physics students. &#039;&#039;Journal of Research in Science Teaching, 30&#039;&#039;, 303-317.&lt;br /&gt;
*O’Malley, M., and Chamot, A. (1994). &#039;&#039;The CALLA Handbook&#039;&#039;. Reading, MA: Addison-Wesley.&lt;br /&gt;
*Priest, A. G., &amp;amp; Lindsay, R. O. (1992). New light on novice-expert differences in physics problem solving. &#039;&#039;British Journal of Psychology, 83&#039;&#039;, 389-405.&lt;br /&gt;
*VanLehn, K., Bhembe, D., Chi, M., Lynch, C., Schulze, K., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2004). Implicit versus explicit learning of strategies in a nonprocedural cognitive skill. In J. C. Lester, R. M. Vicari, &amp;amp; F. Paraguacu, (Eds.), &#039;&#039;Intelligent Tutoring Systems: 7th International Conference&#039;&#039; (pp. 521-530). Berlin: Springer-Verlag.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Post-practice reflection (Katz)|Post-practice reflection (Katz &amp;amp; Connelly, 2006)]]&lt;br /&gt;
* [[Extending Reflective Dialogue Support (Katz &amp;amp; Connelly)|Extending Reflective Dialogue Support (Katz &amp;amp; Connelly, 2008)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; VanLehn)]]&lt;br /&gt;
&lt;br /&gt;
====  Future plans ====&lt;br /&gt;
Our future plans for January 2008 - December 2008:&lt;br /&gt;
* write journal article expanding AIED conference paper&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=7841</id>
		<title>Extending Reflective Dialogue Support (Katz &amp; Connelly)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=7841"/>
		<updated>2008-04-15T21:22:37Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Connections */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Extending Automated Dialogue Support for Robust Learning of Physics ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz &amp;amp; John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 10/1/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 9/30/08&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 75&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 125 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability, while other students show at least a glimmer of understanding of basic physics concepts and principles but are unable to use this knowledge to solve quantitative problems.  The present research seeks to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge via [[post-practice reflection]] dialogues that guide students in learning and practicing the concepts and principles associated with a just-solved physics problem.  It builds upon our 2006 LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]] by trying to better support [[robust learning]] via a third condition in which students work through longer dialogues designed to foster [[transfer]] by specifically tying together qualitative and quantitative knowledge in different contexts.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
* [[Transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge (i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them), combined with quantitative practice in applying these [[knowledge components]] in different contexts, enhance students’ problem-solving ability more than additional problem solving and better foster [[transfer]] and [[robust learning]]?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that instructors deal with a double-edged sword.  Some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability.  Other students display the reverse problem: they show at least a glimmer of understanding of basic physics concepts and principles, but are unable to use this knowledge to solve quantitative problems.   Still other students master neither qualitative nor quantitative understanding of physics; very few master both.  Thus, the instructional challenge motivating this project is to find effective pedagogical strategies to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge.   Our scientific goal is to determine whether explicit and implicit learning can be effectively combined via post-practice dialogues that guide students in reflecting on the concepts and principles associated with a just-solved physics problem.  The main hypothesis tested is that, in the context of tutored problem solving, &#039;&#039;integrative reflective dialogues&#039;&#039; that explicitly tie qualitative knowledge to quantitative knowledge can improve quantitative problem-solving ability and retention of qualitative knowledge better than problem-solving practice (implicit learning) alone.  &lt;br /&gt;
&lt;br /&gt;
To test this hypothesis, we conducted an experiment in the PSLC Physics LearnLab at the US Naval Academy in sections that use the [[Andes]] physics tutoring system (VanLehn et al., 2005a, 2005b).  We compared students who were randomly assigned to one of three conditions on measures of qualitative and quantitative problem-solving performance.  The two treatment conditions engaged in automated reflective dialogues after solving quantitative physics problems, while the control condition solved the same set of problems (plus a few additional problems to balance time on task) without any reflective dialogues, using the standard version of Andes.  In one treatment condition, the reflective dialogues individually targeted the three main types of knowledge that experts employ during problem solving, according to Leonard, Dufresne, &amp;amp; Mestre (1996): knowledge about &#039;&#039;what&#039;&#039; principle(s) to apply to a given problem, &#039;&#039;how&#039;&#039; to apply these principles (e.g., what equations to use), and &#039;&#039;why&#039;&#039; to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).   In a prior LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], this intervention significantly improved students’ qualitative understanding of basic mechanics, as measured by pre-test to post-test gain scores.  However, students did not outperform standard Andes users on more [[robust learning]] measures of [[transfer]] (e.g., performance on quantitative course exams) and on a measure of retention of qualitative problem-solving ability (Katz, Connelly, &amp;amp; Wilson, 2007).  The extended dialogue condition we implemented differs from the other dialogue condition in three main ways:  (1) reflective dialogues contained more problem variations (&#039;&#039;what if&#039;&#039; scenarios), designed to support both qualitative and quantitative knowledge (most of our previous &#039;&#039;what if&#039;&#039; scenarios were qualitative only); (2) these “what if” scenarios were tied to the corresponding Andes problem-solving context &#039;&#039;&#039;and&#039;&#039;&#039; to new contexts, to help support near and far [[transfer]]; and (3) students were prompted to state the rules ([[knowledge components]]) applied to solving the problem variations, in order to promote a principle-based approach to learning, and they were given feedback that makes these rules explicit.  Our goal is to determine whether reflective dialogues that make the links between qualitative and quantitative physics knowledge explicit are more effective than both our previous dialogues and an implicit learning condition that is based on problem-solving practice alone.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;Short-term retention&#039;&#039;. Performance on course exams that cover target topics (e.g., work and energy, translational dynamics, rotation, momentum).&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
Students within each course section were block-randomly assigned to one of three dialogue conditions in which the usage of [[Knowledge Construction Dialogues]] (KCDs) differed.  The short-KCD condition approximated the treatment condition from last year&#039;s study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], in which mostly conceptual dialogues followed problem solving on selected [[Andes]] problems.  The new long-KCD condition appended more quantitative practice and [[transfer]] content (via additional &#039;&#039;what if&#039;&#039; scenarios) to the short KCDs; students in this condition were assigned five fewer Andes problems than those in the short-KCD condition to attempt to equate time on task.  The control condition was identical to last year&#039;s; students saw no KCDs and were assigned five more Andes problems than the short-KCD students.&lt;br /&gt;
&lt;br /&gt;
The following variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
* Number of problems completed before the post-test was administered&lt;br /&gt;
* Number of dialogues that the student completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* College major category&lt;br /&gt;
* Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
The main hypotheses tested were (a) that explicit, part-task training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, would be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems (replicating last year&#039;s study); and (b) that extended dialogues with additional quantitative practice in applying these [[knowledge components]] in different problem-solving contexts would better foster [[transfer]] and [[robust learning]].&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
Preliminary analyses show at least marginal support for a replication of last year&#039;s finding that student performance on the post-test relative to the pre-test was significantly influenced by the number of dialogues they completed, as opposed to the number of target problems they completed.  However, due to software glitches during data collection, low student participation (completing assigned homework and/or dialogues) in some course sections, and evidence of general &amp;quot;gaming the system&amp;quot; by some students, more detailed analyses are pending the identification and omission of noisy data from our overall corpus.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to Advisory Board, January 2008&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Katz, S., Connelly, J., &amp;amp; Wilson, C. (2007).  Out of the lab and into the classroom: An evaluation of reflective dialogue in Andes.  In R. Luckin, K. R. Koedinger, &amp;amp; J. Greer (Eds.), &#039;&#039;Artificial Intelligence in Education: Building Technology Rich Learning Contexts that Work&#039;&#039; (pp. 425-432). Amsterdam: IOS Press.&lt;br /&gt;
*Leonard, W. J., Dufresne, R. J., &amp;amp; Mestre, J. P. (1996). Using qualitative problem-solving strategies to highlight the role of conceptual knowledge in solving problems. &#039;&#039;American Journal of Physics, 64&#039;&#039; (12), 1495-1503.&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Lessons learned.  &#039;&#039;International Journal of Artificial Intelligence in Education&#039;&#039;, &#039;&#039;15&#039;&#039;(3).&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Five years of evaluation.  In G. McCalla, C.K. Looi, B. Bredeweg, &amp;amp; J. Breuker (Eds.), &#039;&#039;Artificial Intelligence in Education&#039;&#039; (pp. 678-685).  Amsterdam, Netherlands: IOS Press.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Post-practice reflection (Katz)]]&lt;br /&gt;
* [[Reflective Dialogues (Katz)|Reflective Dialogues (Katz, Connelly &amp;amp; Treacy, 2007)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; VanLehn)]]&lt;br /&gt;
&lt;br /&gt;
====  Future plans ====&lt;br /&gt;
Our future plans through September 2008:&lt;br /&gt;
* continue to try extracting viable data&lt;br /&gt;
* write journal article reporting updated findings&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Protected]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=7840</id>
		<title>Extending Reflective Dialogue Support (Katz &amp; Connelly)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=7840"/>
		<updated>2008-04-15T21:21:50Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Findings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Extending Automated Dialogue Support for Robust Learning of Physics ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz &amp;amp; John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 10/1/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 9/30/08&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 75&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 125 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability, while other students show at least a glimmer of understanding of basic physics concepts and principles but are unable to use this knowledge to solve quantitative problems.  The present research seeks to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge via [[post-practice reflection]] dialogues that guide students in learning and practicing the concepts and principles associated with a just-solved physics problem.  It builds upon our 2006 LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]] by trying to better support [[robust learning]] via a third condition in which students work through longer dialogues designed to foster [[transfer]] by specifically tying together qualitative and quantitative knowledge in different contexts.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
* [[Transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge (i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them), combined with quantitative practice in applying these [[knowledge components]] in different contexts, enhance students’ problem-solving ability more than additional problem solving and better foster [[transfer]] and [[robust learning]]?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that instructors deal with a double-edged sword.  Some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability.  Other students display the reverse problem: they show at least a glimmer of understanding of basic physics concepts and principles, but are unable to use this knowledge to solve quantitative problems.   Still other students master neither qualitative nor quantitative understanding of physics; very few master both.  Thus, the instructional challenge motivating this project is to find effective pedagogical strategies to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge.   Our scientific goal is to determine whether explicit and implicit learning can be effectively combined via post-practice dialogues that guide students in reflecting on the concepts and principles associated with a just-solved physics problem.  The main hypothesis tested is that, in the context of tutored problem solving, &#039;&#039;integrative reflective dialogues&#039;&#039; that explicitly tie qualitative knowledge to quantitative knowledge can improve quantitative problem-solving ability and retention of qualitative knowledge better than problem-solving practice (implicit learning) alone.  &lt;br /&gt;
&lt;br /&gt;
To test this hypothesis, we conducted an experiment in the PSLC Physics LearnLab at the US Naval Academy in sections that use the [[Andes]] physics tutoring system (VanLehn et al., 2005a, 2005b).  We compared students who were randomly assigned to one of three conditions on measures of qualitative and quantitative problem-solving performance.  The two treatment conditions engaged in automated reflective dialogues after solving quantitative physics problems, while the control condition solved the same set of problems (plus a few additional problems to balance time on task) without any reflective dialogues, using the standard version of Andes.  In one treatment condition, the reflective dialogues individually targeted the three main types of knowledge that experts employ during problem solving, according to Leonard, Dufresne, &amp;amp; Mestre (1996): knowledge about &#039;&#039;what&#039;&#039; principle(s) to apply to a given problem, &#039;&#039;how&#039;&#039; to apply these principles (e.g., what equations to use), and &#039;&#039;why&#039;&#039; to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).   In a prior LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], this intervention significantly improved students’ qualitative understanding of basic mechanics, as measured by pre-test to post-test gain scores.  However, students did not outperform standard Andes users on more [[robust learning]] measures of [[transfer]] (e.g., performance on quantitative course exams) and on a measure of retention of qualitative problem-solving ability (Katz, Connelly, &amp;amp; Wilson, 2007).  The extended dialogue condition we implemented differs from the other dialogue condition in three main ways:  (1) reflective dialogues contained more problem variations (&#039;&#039;what if&#039;&#039; scenarios), designed to support both qualitative and quantitative knowledge (most of our previous &#039;&#039;what if&#039;&#039; scenarios were qualitative only); (2) these “what if” scenarios were tied to the corresponding Andes problem-solving context &#039;&#039;&#039;and&#039;&#039;&#039; to new contexts, to help support near and far [[transfer]]; and (3) students were prompted to state the rules ([[knowledge components]]) applied to solving the problem variations, in order to promote a principle-based approach to learning, and they were given feedback that makes these rules explicit.  Our goal is to determine whether reflective dialogues that make the links between qualitative and quantitative physics knowledge explicit are more effective than both our previous dialogues and an implicit learning condition that is based on problem-solving practice alone.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;Short-term retention&#039;&#039;.  Performance on course exams that cover target topics (e.g., work and energy, translational dynamics, rotation, momentum).&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
Students within each course section were block-randomly assigned to one of three dialogue conditions in which the usage of [[Knowledge Construction Dialogues]] (KCDs) differed.  The short-KCD condition approximated the treatment condition from last year&#039;s study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], in which mostly conceptual dialogues followed problem solving on selected [[Andes]] problems.  The new long-KCD condition appended more quantitative practice and [[transfer]] content (via additional &#039;&#039;what if&#039;&#039; scenarios) to the short KCDs; students in this condition were assigned five fewer Andes problems than those in the short-KCD condition to attempt to equate time on task.  The control condition was identical to last year&#039;s; students saw no KCDs and were assigned five more Andes problems than the short-KCD students.&lt;br /&gt;
&lt;br /&gt;
The following variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
* Number of problems completed before the post-test was administered&lt;br /&gt;
* Number of dialogues that the student completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* College major category&lt;br /&gt;
* Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
The main hypotheses tested were (a) that explicit, part-task training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, would be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems (replicating last year&#039;s study); and (b) that extended dialogues with additional quantitative practice in applying these [[knowledge components]] in different problem-solving contexts would better foster [[transfer]] and [[robust learning]].&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
Preliminary analyses show at least marginal support for a replication of last year&#039;s finding that student performance on the post-test relative to the pre-test was significantly influenced by the number of dialogues they completed, as opposed to the number of target problems they completed.  However, due to software glitches during data collection, low student participation (completing assigned homework and/or dialogues) in some course sections, and evidence of general &amp;quot;gaming the system&amp;quot; by some students, more detailed analyses are pending the identification and omission of noisy data from our overall corpus.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to Advisory Board, January 2008&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Katz, S., Connelly, J., &amp;amp; Wilson, C. (2007).  Out of the lab and into the classroom: An evaluation of reflective dialogue in Andes.  In R. Luckin, K. R. Koedinger, &amp;amp; J. Greer (Eds.), &#039;&#039;Artificial Intelligence in Education: Building Technology Rich Learning Contexts that Work&#039;&#039; (pp. 425-432). Amsterdam: IOS Press.&lt;br /&gt;
*Leonard, W. J., Dufresne, R. J., &amp;amp; Mestre, J. P. (1996). Using qualitative problem-solving strategies to highlight the role of conceptual knowledge in solving problems. &#039;&#039;American Journal of Physics, 64&#039;&#039; (12), 1495-1503.&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Lessons learned.  &#039;&#039;International Journal of Artificial Intelligence in Education&#039;&#039;, &#039;&#039;15&#039;&#039;(3).&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Five years of evaluation.  In G. McCalla, C.K. Looi, B. Bredeweg, &amp;amp; J. Breuker (Eds.), &#039;&#039;Artificial Intelligence in Education&#039;&#039; (pp. 678-685).  Amsterdam, Netherlands: IOS Press.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Post-practice reflection (Katz)]]&lt;br /&gt;
* [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; VanLehn)]]&lt;br /&gt;
&lt;br /&gt;
====  Future plans ====&lt;br /&gt;
Our future plans through September 2008:&lt;br /&gt;
* continue to try extracting viable data&lt;br /&gt;
* write journal article reporting updated findings&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Protected]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=7839</id>
		<title>Extending Reflective Dialogue Support (Katz &amp; Connelly)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=7839"/>
		<updated>2008-04-15T21:20:41Z</updated>

		<summary type="html">&lt;p&gt;Connelly: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Extending Automated Dialogue Support for Robust Learning of Physics ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz &amp;amp; John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 10/1/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 9/30/08&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 75&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 125 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability, while other students show at least a glimmer of understanding of basic physics concepts and principles but are unable to use this knowledge to solve quantitative problems.  The present research seeks to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge via [[post-practice reflection]] dialogues that guide students in learning and practicing the concepts and principles associated with a just-solved physics problem.  It builds upon our 2006 LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]] by trying to better support [[robust learning]] via a third condition in which students work through longer dialogues designed to foster [[transfer]] by specifically tying together qualitative and quantitative knowledge in different contexts.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
* [[Transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge (i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them), combined with quantitative practice in applying these [[knowledge components]] in different contexts, enhance students’ problem-solving ability more than additional problem solving and better foster [[transfer]] and [[robust learning]]?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that instructors deal with a double-edged sword.  Some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability.  Other students display the reverse problem: they show at least a glimmer of understanding of basic physics concepts and principles, but are unable to use this knowledge to solve quantitative problems.   Still other students master neither qualitative nor quantitative understanding of physics; very few master both.  Thus, the instructional challenge motivating this project is to find effective pedagogical strategies to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge.   Our scientific goal is to determine whether explicit and implicit learning can be effectively combined via post-practice dialogues that guide students in reflecting on the concepts and principles associated with a just-solved physics problem.  The main hypothesis tested is that, in the context of tutored problem solving, &#039;&#039;integrative reflective dialogues&#039;&#039; that explicitly tie qualitative knowledge to quantitative knowledge can improve quantitative problem-solving ability and retention of qualitative knowledge better than problem-solving practice (implicit learning) alone.  &lt;br /&gt;
&lt;br /&gt;
To test this hypothesis, we conducted an experiment in the PSLC Physics LearnLab at the US Naval Academy in sections that use the [[Andes]] physics tutoring system (VanLehn et al., 2005a, 2005b).  We compared students who were randomly assigned to one of three conditions on measures of qualitative and quantitative problem-solving performance.  The two treatment conditions engaged in automated reflective dialogues after solving quantitative physics problems, while the control condition solved the same set of problems (plus a few additional problems to balance time on task) without any reflective dialogues, using the standard version of Andes.  In one treatment condition, the reflective dialogues individually targeted the three main types of knowledge that experts employ during problem solving, according to Leonard, Dufresne, &amp;amp; Mestre (1996): knowledge about &#039;&#039;what&#039;&#039; principle(s) to apply to a given problem, &#039;&#039;how&#039;&#039; to apply these principles (e.g., what equations to use), and &#039;&#039;why&#039;&#039; to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).   In a prior LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], this intervention significantly improved students’ qualitative understanding of basic mechanics, as measured by pre-test to post-test gain scores.  However, students did not outperform standard Andes users on more [[robust learning]] measures of [[transfer]] (e.g., performance on quantitative course exams) and on a measure of retention of qualitative problem-solving ability (Katz, Connelly, &amp;amp; Wilson, 2007).  The extended dialogue condition we implemented differs from the other dialogue condition in three main ways:  (1) reflective dialogues contained more problem variations (&#039;&#039;what if&#039;&#039; scenarios), designed to support both qualitative and quantitative knowledge (most of our previous &#039;&#039;what if&#039;&#039; scenarios were qualitative only); (2) these “what if” scenarios were tied to the corresponding Andes problem-solving context &#039;&#039;&#039;and&#039;&#039;&#039; to new contexts, to help support near and far [[transfer]]; and (3) students were prompted to state the rules ([[knowledge components]]) applied to solving the problem variations, in order to promote a principle-based approach to learning, and they were given feedback that makes these rules explicit.  Our goal is to determine whether reflective dialogues that make the links between qualitative and quantitative physics knowledge explicit are more effective than both our previous dialogues and an implicit learning condition that is based on problem-solving practice alone.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;Short-term retention&#039;&#039;.  Performance on course exams that cover target topics (e.g., work and energy, translational dynamics, rotation, momentum).&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
Students within each course section were block-randomly assigned to one of three dialogue conditions in which the usage of [[Knowledge Construction Dialogues]] (KCDs) differed.  The short-KCD condition approximated the treatment condition from last year&#039;s study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], in which mostly conceptual dialogues followed problem solving on selected [[Andes]] problems.  The new long-KCD condition appended more quantitative practice and [[transfer]] content (via additional &#039;&#039;what if&#039;&#039; scenarios) to the short KCDs; students in this condition were assigned five fewer Andes problems than those in the short-KCD condition to attempt to equate time on task.  The control condition was identical to last year&#039;s; students saw no KCDs and were assigned five more Andes problems than the short-KCD students.&lt;br /&gt;
&lt;br /&gt;
The following variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
* Number of problems completed before the post-test was administered&lt;br /&gt;
* Number of dialogues that the student completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* College major category&lt;br /&gt;
* Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
The main hypotheses tested were (a) that explicit, part-task training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, would be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems (replicating last year&#039;s study); and (b) that extended dialogues with additional quantitative practice in applying these [[knowledge components]] in different problem-solving contexts would better foster [[transfer]] and [[robust learning]].&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
Preliminary analyses show at least marginal support for a replication of last year&#039;s finding that student performance on the post-test relative to the pre-test was significantly influenced by the number of dialogues they completed, as opposed to the number of target problems they completed.  However, due to software glitches during data collection, low student participation (completing assigned homework and/or dialogues) in some course sections, and evidence of general &amp;quot;gaming the system&amp;quot; by some students, more detailed analyses are pending the identification and removal of noisy data from our overall corpus.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to Advisory Board, January 2008&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Katz, S., Connelly, J., &amp;amp; Wilson, C. (2007).  Out of the lab and into the classroom: An evaluation of reflective dialogue in Andes.  In R. Luckin, K. R. Koedinger, &amp;amp; J. Greer (Eds.), &#039;&#039;Artificial Intelligence in Education: Building Technology Rich Learning Contexts that Work&#039;&#039; (pp. 425-432). Amsterdam: IOS Press.&lt;br /&gt;
*Leonard, W. J., Dufresne, R. J., &amp;amp; Mestre, J. P. (1996). Using qualitative problem-solving strategies to highlight the role of conceptual knowledge in solving problems. &#039;&#039;American Journal of Physics, 64&#039;&#039; (12), 1495-1503.&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Lessons learned.  &#039;&#039;International Journal of Artificial Intelligence in Education&#039;&#039;, &#039;&#039;15&#039;&#039;(3).&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Five years of evaluation.  In G. McCalla, C.K. Looi, B. Bredeweg, &amp;amp; J. Breuker (Eds.), &#039;&#039;Artificial Intelligence in Education&#039;&#039; (pp. 678-685).  Amsterdam, Netherlands: IOS Press.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Post-practice reflection (Katz)]]&lt;br /&gt;
* [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]]&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; VanLehn)]]&lt;br /&gt;
&lt;br /&gt;
====  Future plans ====&lt;br /&gt;
Our future plans through September 2008:&lt;br /&gt;
* continue to try extracting viable data&lt;br /&gt;
* write journal article reporting updated findings&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Protected]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=7838</id>
		<title>Extending Reflective Dialogue Support (Katz &amp; Connelly)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=7838"/>
		<updated>2008-04-15T21:19:17Z</updated>

		<summary type="html">&lt;p&gt;Connelly: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Extending Automated Dialogue Support for Robust Learning of Physics ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz &amp;amp; John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 10/1/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 9/30/08&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 75&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 125 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability, while other students show at least a glimmer of understanding of basic physics concepts and principles but are unable to use this knowledge to solve quantitative problems.  The present research seeks to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge via [[post-practice reflection]] dialogues that guide students in learning and practicing the concepts and principles associated with a just-solved physics problem.  It builds upon our 2006 LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]] by trying to better support [[robust learning]] via a third condition in which students work through longer dialogues designed to foster [[transfer]] by specifically tying together qualitative and quantitative knowledge in different contexts.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
* [[Transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge (i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them), combined with quantitative practice in applying these [[knowledge components]] in different contexts, enhance students’ problem-solving ability more than additional problem solving and better foster [[transfer]] and [[robust learning]]?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that instructors deal with a double-edged sword.  Some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability.  Other students display the reverse problem: they show at least a glimmer of understanding of basic physics concepts and principles, but are unable to use this knowledge to solve quantitative problems.   Still other students master neither qualitative nor quantitative understanding of physics; very few master both.  Thus, the instructional challenge motivating this project is to find effective pedagogical strategies to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge.   Our scientific goal is to determine whether explicit and implicit learning can be effectively combined via post-practice dialogues that guide students in reflecting on the concepts and principles associated with a just-solved physics problem.  The main hypothesis tested is that, in the context of tutored problem solving, &#039;&#039;integrative reflective dialogues&#039;&#039; that explicitly tie qualitative knowledge to quantitative knowledge can improve quantitative problem-solving ability and retention of qualitative knowledge better than problem-solving practice (implicit learning) alone.  &lt;br /&gt;
&lt;br /&gt;
To test this hypothesis, we conducted an experiment in the PSLC Physics LearnLab at the US Naval Academy in sections that use the [[Andes]] physics tutoring system (VanLehn et al., 2005a, 2005b).  We compared students who were randomly assigned to one of three conditions on measures of qualitative and quantitative problem-solving performance.  The two treatment conditions engaged in automated reflective dialogues after solving quantitative physics problems, while the control condition solved the same set of problems (plus a few additional problems to balance time on task) without any reflective dialogues, using the standard version of Andes.  In one treatment condition, the reflective dialogues individually targeted the three main types of knowledge that experts employ during problem solving, according to Leonard, Dufresne, &amp;amp; Mestre (1996): knowledge about &#039;&#039;what&#039;&#039; principle(s) to apply to a given problem, &#039;&#039;how&#039;&#039; to apply these principles (e.g., what equations to use), and &#039;&#039;why&#039;&#039; to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).   In a prior LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], this intervention significantly improved students’ qualitative understanding of basic mechanics, as measured by pre-test to post-test gain scores.  However, students did not outperform standard Andes users on more [[robust learning]] measures of [[transfer]] (e.g., performance on quantitative course exams) and on a measure of retention of qualitative problem-solving ability (Katz, Connelly, &amp;amp; Wilson, 2007).  The extended dialogue condition we implemented differs from the other dialogue condition in three main ways:  (1) reflective dialogues contained more problem variations (&#039;&#039;what if&#039;&#039; scenarios), designed to support both qualitative and quantitative knowledge (most of our previous &#039;&#039;what if&#039;&#039; scenarios were qualitative only); (2) these “what if” scenarios were tied to the corresponding Andes problem-solving context &#039;&#039;&#039;and&#039;&#039;&#039; to new contexts, to help support near and far [[transfer]]; and (3) students were prompted to state the rules ([[knowledge components]]) applied to solving the problem variations, in order to promote a principle-based approach to learning, and they were given feedback that makes these rules explicit.  Our goal is to determine whether reflective dialogues that make the links between qualitative and quantitative physics knowledge explicit are more effective than both our previous dialogues and an implicit learning condition that is based on problem-solving practice alone.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;Short-term retention&#039;&#039;.  Performance on course exams that cover target topics (e.g., work and energy, translational dynamics, rotation, momentum).&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
Students within each course section were block-randomly assigned to one of three dialogue conditions in which the usage of [[Knowledge Construction Dialogues]] (KCDs) differed.  The short-KCD condition approximated the treatment condition from last year&#039;s study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], in which mostly conceptual dialogues followed problem solving on selected [[Andes]] problems.  The new long-KCD condition appended more quantitative practice and [[transfer]] content (via additional &#039;&#039;what if&#039;&#039; scenarios) to the short KCDs; students in this condition were assigned five fewer Andes problems than those in the short-KCD condition to attempt to equate time on task.  The control condition was identical to last year&#039;s; students saw no KCDs and were assigned five more Andes problems than the short-KCD students.&lt;br /&gt;
&lt;br /&gt;
The following variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
* Number of problems completed before the post-test was administered&lt;br /&gt;
* Number of dialogues that the student completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* College major category&lt;br /&gt;
* Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
The main hypotheses tested were (a) that explicit, part-task training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, would be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems (replicating last year&#039;s study); and (b) that extended dialogues with additional quantitative practice in applying these [[knowledge components]] in different problem-solving contexts would better foster [[transfer]] and [[robust learning]].&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
Preliminary analyses show at least marginal support for a replication of last year&#039;s finding that student performance on the post-test relative to the pre-test was significantly influenced by the number of dialogues they completed, as opposed to the number of target problems they completed.  However, due to software glitches during data collection, low student participation (completing assigned homework and/or dialogues) in some course sections, and evidence of general &amp;quot;gaming the system&amp;quot; by some students, more detailed analyses are pending the identification and removal of noisy data from our overall corpus.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to Advisory Board, January 2008&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Katz, S., Connelly, J., &amp;amp; Wilson, C. (2007).  Out of the lab and into the classroom: An evaluation of reflective dialogue in Andes.  In R. Luckin, K. R. Koedinger, &amp;amp; J. Greer (Eds.), &#039;&#039;Artificial Intelligence in Education: Building Technology Rich Learning Contexts that Work&#039;&#039; (pp. 425-432). Amsterdam: IOS Press.&lt;br /&gt;
*Leonard, W. J., Dufresne, R. J., &amp;amp; Mestre, J. P. (1996). Using qualitative problem-solving strategies to highlight the role of conceptual knowledge in solving problems. &#039;&#039;American Journal of Physics, 64&#039;&#039; (12), 1495-1503.&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Lessons learned.  &#039;&#039;International Journal of Artificial Intelligence in Education&#039;&#039;, &#039;&#039;15&#039;&#039;(3).&lt;br /&gt;
*VanLehn, K., Lynch, C., Schulze, K., Shapiro, J.A., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2005).  The Andes physics tutoring system: Five years of evaluation.  In G. McCalla, C.K. Looi, B. Bredeweg, &amp;amp; J. Breuker (Eds.), &#039;&#039;Artificial Intelligence in Education&#039;&#039; (pp. 678-685).  Amsterdam, Netherlands: IOS Press.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Post-practice reflection (Katz)]] &lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; VanLehn)]]&lt;br /&gt;
&lt;br /&gt;
====  Future plans ====&lt;br /&gt;
Our future plans through September 2008:&lt;br /&gt;
* continue to try extracting viable data&lt;br /&gt;
* write journal article reporting updated findings&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Protected]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Reflective_Dialogues_(Katz)&amp;diff=7837</id>
		<title>Reflective Dialogues (Katz)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Reflective_Dialogues_(Katz)&amp;diff=7837"/>
		<updated>2008-04-15T21:07:36Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Further Information */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Do Reflective Dialogues that Explicitly Target the “What? How? and Why (not)?” Knowledge of Physics Problem Solving Promote Expert-like Planning Ability? ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz, John Connelly, Donald Treacy&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 3/1/06&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 6/30/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 67&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 750 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
One of the main differences between experts and novices in physics is that experts are more adept at identifying relevant principles and generating solution plans before starting to solve a problem (Chi, Glaser &amp;amp; Rees, 1982; Dufresne, Gerace, Hardiman, &amp;amp; Mestre, 1992; Priest &amp;amp; Lindsay, 1992).   We propose that this difference may be due to two reasons: (1) in traditional physics courses, students are not explicitly asked to plan, nor are they given scaffolding to support planning during problem solving, and (2) many students lack the basic knowledge of physics concepts, principles, and procedures that is prerequisite for effective planning.&lt;br /&gt;
&lt;br /&gt;
In this project, we tested the effectiveness of engaging students in reflective dialogues after they solve problems in [[Andes]] that explicitly target the three main types of knowledge that experts employ during planning: knowledge about what principle(s) to apply to a given problem, how to apply these principles (e.g., what equations to use), and why (or why not) to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).  The main hypothesis tested is that explicit training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, will be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems.&lt;br /&gt;
   &lt;br /&gt;
To test this hypothesis, we conducted a Physics LearnLab study in which we compared the effectiveness of an experimental version of Andes that engaged students in reflective “what? how? and why?” dialogues after physics problem solving ([[explicit instruction]] condition) with a control version that gave students additional problem-solving practice after they solved an Andes problem—e.g., by having them identify bugs in a sample student solution to a problem ([[implicit instruction]] condition).  We predicted that students in the explicit condition would outperform students in the implicit condition with respect to gain score from pre-test to post-test and scores on course exams that targeted far transfer.  Consistent with prior research, the dialogues promoted conceptual understanding of physics. However, measures of retention and transfer to problem-solving ability showed only a marginal effect of completing the reflective dialogues.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge—i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them—enhance students’ problem-solving ability?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Because experts develop schemata largely through a good deal of practice with solving various types of problems, traditional introductory college physics courses take a similar approach.  They are structured so that students are introduced to concepts, principles, and procedures in their text and through lectures, which they are then asked to apply to many problem-solving exercises.  The hope is that, with extensive practice, students will integrate [[procedural]] and [[conceptual knowledge]] and develop expert-like schemata and planning skills.  Unfortunately, many students don’t.  In addition to exiting these courses with lingering naïve misconceptions (e.g., Halloun &amp;amp; Hestenes, 1985; McDermott, 1984), they continue to solve problems largely by manipulating equations until the desired quantity is isolated—that is, by means-end analysis—instead of by identifying relevant principles and generating solution plans before starting to solve a problem, as experts do (Dufresne et al., 1992; Larkin, McDermott, Simon, &amp;amp; Simon, 1980; Priest &amp;amp; Lindsay, 1992).   &lt;br /&gt;
&lt;br /&gt;
We refer to the traditional approach to physics instruction described in the preceding paragraph as [[implicit instruction]], because it encourages the inductive development of abstractions (concepts, principles, and schemata) through repeated exposure to instances, instead of by explicitly reifying these abstractions (O&#039;Malley &amp;amp; Chamot, 1994).  In response to the limitations of implicit approaches to physics instruction that neither ask students to plan nor scaffold them in doing it, several instructional scientists have proposed methods that explicitly engage students in planning exercises (Dufresne et al., 1992; Leonard, Dufresne, &amp;amp; Mestre, 1996; Mestre, Dufresne, Gerace, &amp;amp; Hardiman, 1993).  These methods have met with modest success when tested mainly using high-achieving students (B or above in an introductory college physics course).  We suggest that the main reason that many students, even high achievers, are unable to use explicit planning methods effectively is that they lack the basic concepts, principles, and procedures that are prerequisites for effective planning.  &lt;br /&gt;
&lt;br /&gt;
This project tested the effectiveness of engaging students in reflective dialogues after they solve problems in Andes (e.g., Gertner &amp;amp; VanLehn, 2000) which explicitly targeted the three main types of knowledge that experts employ during planning: knowledge about what principle(s) to apply to a given problem, how to apply these principles (e.g., what equations to use), and why to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).  Explicit instruction on these knowledge components proved to be effective and more efficient than giving students repeated practice in solving different types of problems.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;Short-term retention&#039;&#039;.  Performance on course exams that cover target topics (e.g., work and energy, translational dynamics, rotation, momentum).&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Students within each course section were block-randomly assigned to one of two dialogue conditions: standard [[Knowledge Construction Dialogues]] (KCDs) or control with no KCDs.  Students in the control condition were assigned an additional five Andes problems to attempt to better equate time on task.&lt;br /&gt;
&lt;br /&gt;
The following variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
* Number of problems completed before the post-test was administered&lt;br /&gt;
* Number of dialogues that the student completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* College major&lt;br /&gt;
* Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
The main hypothesis tested was that explicit, part-task training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, would be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems.&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
Analyses of pre- and post-test scores were more encouraging than for last year&#039;s study ([[Post-practice reflection (Katz)]]).  After omitting scores from one student with a post-test duration of less than two minutes, we were left with treatment and control groups of equal size (&#039;&#039;n&#039;&#039; = 33). We also re-classified two treatment subjects who did no dialogues as control subjects (effective treatment and control &#039;&#039;n&#039;&#039;s = 31 and 35, respectively), although these subjects were not able to access the five extra control-group Andes problems. As we had hoped, ANOVA showed no significant differences between the effective treatment and control groups on pre-test score (respective &#039;&#039;M&#039;&#039;s = 12.10 and 11.77; &#039;&#039;F&#039;&#039; &amp;lt; 1), but treatment subjects had higher mean post-test scores (17.97 vs. 15.57; &#039;&#039;F&#039;&#039;(1, 64) = 4.89, &#039;&#039;p&#039;&#039; = .031), mean raw gain scores (5.87 vs. 3.80; &#039;&#039;F&#039;&#039; = 5.62, &#039;&#039;p&#039;&#039; = .021), and mean Estes gain scores (0.330 vs. 0.208; &#039;&#039;F&#039;&#039; = 6.74, &#039;&#039;p&#039;&#039; = .012) than did control subjects. In short, without regard to problem-solving practice, subjects who did KCDs did significantly better on the post-test than those who did not.&lt;br /&gt;
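&lt;br /&gt;
For reference, the two gain measures mentioned above can be computed as in the sketch below.  The raw gain is simply post-test minus pre-test; the normalized gain shown (gain divided by the maximum possible gain, using an assumed maximum score) is only presumed to correspond to the Estes gain score reported above, and the numbers are invented.&lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
&lt;br /&gt;
MAX_SCORE = 25.0                           # assumed maximum test score, for illustration only&lt;br /&gt;
pretest  = np.array([12.0, 10.0, 15.0])    # hypothetical pre-test scores&lt;br /&gt;
posttest = np.array([18.0, 13.0, 21.0])    # hypothetical post-test scores&lt;br /&gt;
&lt;br /&gt;
raw_gain  = posttest - pretest                # 6.0, 3.0, 6.0&lt;br /&gt;
norm_gain = raw_gain / (MAX_SCORE - pretest)  # fraction of the possible gain achieved&lt;br /&gt;
print(raw_gain.mean(), norm_gain.mean())&lt;br /&gt;
&lt;/pre&gt;&lt;br /&gt;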
&lt;br /&gt;
Student participation in both homework problem solving and dialogues was much improved over last year, with 21 of 34 control subjects (62%) and 23 of 33 treatment subjects (70%) finishing at least 80% of the assigned target problems (of 31 for control, 26 for treatment) prior to the post-test and with 25 treatment subjects (76%) finishing at least 80% of the 26 associated KCDs. However, participation was still far from perfect; 7 treatment subjects (21%) completed half or fewer of the assigned KCDs on time, and 2 of them completed no target problems or KCDs on time.  We again treated KCD completion and target problem completion as continuous independent variables in regression analyses. Regressing post-test score on pre-test score, QPA, number of KCDs completed, and number of target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .52, &#039;&#039;F&#039;&#039;(4, 61) = 16.70, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors, but only pre-test score, QPA, and KCD completion were statistically significant (&#039;&#039;p&#039;&#039;s &amp;lt; .001, .05, &amp;amp; .05, respectively); problem completion was &#039;&#039;ns&#039;&#039; (&#039;&#039;p&#039;&#039; = .54). Therefore, across all subjects it was KCD completion, as opposed to target homework problem completion, that significantly predicted post-test performance.&lt;br /&gt;
&lt;br /&gt;
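For concreteness, the regression just described can be sketched as follows (Python with statsmodels; the predictors pre-test score, QPA, KCDs completed, and target problems completed come from the text, but the data, coding, and coefficients below are hypothetical).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
import pandas as pd&lt;br /&gt;
import statsmodels.api as sm&lt;br /&gt;
&lt;br /&gt;
rng = np.random.default_rng(0)&lt;br /&gt;
n = 66  # analysis sample size reported above&lt;br /&gt;
&lt;br /&gt;
# hypothetical student-level data, scaled roughly like the study measures&lt;br /&gt;
df = pd.DataFrame({&lt;br /&gt;
    &#039;pretest&#039;: rng.integers(5, 21, n),&lt;br /&gt;
    &#039;qpa&#039;: rng.uniform(2.0, 4.0, n),&lt;br /&gt;
    &#039;kcds_done&#039;: rng.integers(0, 27, n),      # of 26 assigned KCDs&lt;br /&gt;
    &#039;problems_done&#039;: rng.integers(0, 32, n),  # of 26-31 target problems&lt;br /&gt;
})&lt;br /&gt;
df[&#039;posttest&#039;] = (0.8 * df[&#039;pretest&#039;] + 1.5 * df[&#039;qpa&#039;]&lt;br /&gt;
                  + 0.15 * df[&#039;kcds_done&#039;] + rng.normal(0, 2, n))&lt;br /&gt;
&lt;br /&gt;
# ordinary least squares: post-test regressed on the four predictors&lt;br /&gt;
X = sm.add_constant(df[[&#039;pretest&#039;, &#039;qpa&#039;, &#039;kcds_done&#039;, &#039;problems_done&#039;]])&lt;br /&gt;
model = sm.OLS(df[&#039;posttest&#039;], X).fit()&lt;br /&gt;
print(model.summary())  # reports R-squared, the overall F, and per-predictor p-values&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;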
To measure retention and transfer, we also analyzed scores from an hourly exam administered by one instructor to his two sections (&#039;&#039;n&#039;&#039;=47). All problems on this exam were quantitative and covered a subset of the units targeted during the intervention.  Scores on this exam ranged from 176 to 382 (of 400), and were highly correlated with&lt;br /&gt;
pre- and post-test scores; &#039;&#039;r&#039;&#039;s(45) = .54 and .71, &#039;&#039;p&#039;&#039;s &amp;lt; .0001 and .00001. However,&lt;br /&gt;
ANOVAs showed no differences between groups (Fs &amp;lt; 1), and regression of subscores on QPA, KCDs completed, and target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .54, &#039;&#039;F&#039;&#039;(3, 43) = 16.58,&lt;br /&gt;
&#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors but a significant effect of only QPA (&#039;&#039;p&#039;&#039; &amp;lt; .00001). Therefore, student performance on the hourly exam was not significantly affected by either KCD completion or target problem completion.&lt;br /&gt;
&lt;br /&gt;
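The correlations reported above are ordinary Pearson correlations; a minimal sketch (Python, with made-up paired scores rather than the study data) is:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import numpy as np&lt;br /&gt;
from scipy import stats&lt;br /&gt;
&lt;br /&gt;
# hypothetical paired scores (the text reports n = 47, i.e., df = 45 for r)&lt;br /&gt;
exam = np.array([310.0, 295.0, 350.0, 270.0, 330.0, 288.0])&lt;br /&gt;
post = np.array([17.0, 15.0, 20.0, 12.0, 19.0, 14.0])&lt;br /&gt;
&lt;br /&gt;
r, p = stats.pearsonr(exam, post)&lt;br /&gt;
print(r, p)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;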
In summary, student performance on the post-test relative to the pre-test was significantly influenced by the number of dialogues they completed, as opposed to the number of target problems they completed prior to the post-test. However, neither measure had a significant effect on student performance on an exam that focused on&lt;br /&gt;
problem-solving ability.&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
This study is part of the Interactive Communication cluster, and its hypothesis is a specialization of the IC cluster’s central hypothesis.  The IC cluster’s central hypothesis is that robust learning occurs when two conditions are met:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;The learning event space should have paths that are mostly learning-by-doing along with alternative paths where a second agent does most of the work&#039;&#039;.  Since this experiment did not deal with collaboration between agents, it did not test this condition.  It did, however, show that the more that students engage in one form of learning-by-doing—namely, post-practice reflection—the more they learn.&lt;br /&gt;
* &#039;&#039;The student takes the learning-by-doing path unless it becomes too difficult&#039;&#039;.  More students took the learning-by-doing path than in our prior study, perhaps because this year instructors &#039;&#039;required&#039;&#039; them to do so (i.e., this time there were negative consequences for avoiding the dialogues).&lt;br /&gt;
&lt;br /&gt;
Analyses of retention and transfer to problem-solving ability did not reveal a significant effect of the reflective dialogues. We had hoped that the capstone dialogues that treatment students completed before selected problems, at the end of each unit, would support problem-solving ability. It is possible that the intervention consisted of too few capstone dialogues (only five) to observe any effect.&lt;br /&gt;
&lt;br /&gt;
What seems to be the critical feature of post-practice reflection (PPR) in supporting conceptual understanding is the explicit instruction that it provides in domain knowledge. PPR may help students to fill in knowledge gaps, resolve misconceptions,&lt;br /&gt;
and abstract from the case at hand so that they are better prepared to engage in constructive activity (e.g., self-explanation) in future problems. Preliminary research comparing Andes (which encourages implicit learning of problem-solving strategies) with Pyrenees, a system that teaches problem-solving strategies explicitly, also&lt;br /&gt;
suggests an advantage for explicit instruction (VanLehn et al., 2004). Further research is needed to identify the mechanisms that drive learning from reflective dialogue, and to increase its potential to enhance problem-solving ability in addition to conceptual knowledge.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to site visitors, 2005&lt;br /&gt;
* Full paper accepted at AIED 2007:&lt;br /&gt;
**Katz, S., Connelly, J., &amp;amp; Wilson, C. (2007). Out of the lab and into the classroom: An evaluation of reflective dialogue in Andes. In R. Luckin, K. R. Koedinger, &amp;amp; J. Greer (Eds.), &#039;&#039;Artificial Intelligence in Education: Building Technology Rich Learning Contexts that Work&#039;&#039; (pp. 425-432). Amsterdam: IOS Press.&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Chi, M. T. H., Glaser, R., &amp;amp; Rees, E. (1982). Expertise in problem solving. In R. J. Sternberg (Ed.), &#039;&#039;Advances in the Psychology of Human Intelligence, Vol. 1&#039;&#039; (pp. 7-75). Hillsdale, NJ: Erlbaum.&lt;br /&gt;
*Dufresne, R. J., Gerace, W. J., Hardiman, P. T., &amp;amp; Mestre, J. P. (1992). Constraining novices to perform expertlike analyses: Effects on schema acquisition. &#039;&#039;The Journal of the Learning Sciences, 2&#039;&#039; (3), 307-331.&lt;br /&gt;
*Gertner, A. S., &amp;amp; VanLehn, K. (2000). Andes: A coached problem solving environment for physics. In G. Gauthier, C. Frasson, &amp;amp; K. VanLehn (Eds.), &#039;&#039;ITS 2000: Proceedings of the 5th International Conference on Intelligent Tutoring Systems&#039;&#039; (pp. 133-142). Berlin: Springer-Verlag.&lt;br /&gt;
*Larkin, J., McDermott, J., Simon, D. P., &amp;amp; Simon, H. A. (1980). Expert and novice performance in solving physics problems. &#039;&#039;Science, 208&#039;&#039;, 1335-1342.&lt;br /&gt;
*Leonard, W. J., Dufresne, R. J., &amp;amp; Mestre, J. P. (1996). Using qualitative problem-solving strategies to highlight the role of conceptual knowledge in solving problems. &#039;&#039;American Journal of Physics, 64&#039;&#039; (12), 1495-1503.&lt;br /&gt;
*Mestre, J. P., Dufresne, R. J., Gerace, W. J., &amp;amp; Hardiman, P. T. (1993). Promoting skilled problem-solving behavior among beginning physics students. &#039;&#039;Journal of Research in Science Teaching, 30&#039;&#039;, 303-317.&lt;br /&gt;
*O’Malley, J. M., &amp;amp; Chamot, A. U. (1994). &#039;&#039;The CALLA Handbook&#039;&#039;. Reading, MA: Addison-Wesley.&lt;br /&gt;
*Priest, A. G., &amp;amp; Lindsay, R. O. (1992). New light on novice-expert differences in physics problem solving. &#039;&#039;British Journal of Psychology, 83&#039;&#039;, 389-405.&lt;br /&gt;
*VanLehn, K., Bhembe, D., Chi, M., Lynch, C., Schulze, K., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2004). Implicit versus explicit learning of strategies in a nonprocedural cognitive skill. In J. C. Lester, R. M. Vicari, &amp;amp; F. Paraguacu, (Eds.), &#039;&#039;Intelligent Tutoring Systems: 7th International Conference&#039;&#039; (pp. 521-530). Berlin: Springer-Verlag.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Post-practice reflection (Katz)]] &lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; Vanlehn)]]&lt;br /&gt;
&lt;br /&gt;
====  Future plans ====&lt;br /&gt;
Our future plans for January 2008 - December 2008:&lt;br /&gt;
* write journal article expanding AIED conference paper&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=7836</id>
		<title>Extending Reflective Dialogue Support (Katz &amp; Connelly)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=7836"/>
		<updated>2008-04-15T20:49:16Z</updated>

		<summary type="html">&lt;p&gt;Connelly: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Extending Automated Dialogue Support for Robust Learning of Physics ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz &amp;amp; John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 10/1/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 9/30/08&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 75&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 125 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability, while other students show at least a glimmer of understanding of basic physics concepts and principles but are unable to use this knowledge to solve quantitative problems.  The present research seeks to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge via [[post-practice reflection]] dialogues that guide students in learning and practicing the concepts and principles associated with a just-solved physics problem.  It builds upon our 2006 LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly &amp;amp; Treacy)]] by trying to better support [[robust learning]] via a third condition in which students work through longer dialogues designed to foster [[transfer]] by specifically tying together qualitative and quantitative knowledge in different contexts.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
* [[Transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge (i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them), combined with quantitative practice in applying these [[knowledge components]] in different contexts, enhance students’ problem-solving ability more than additional problem solving and better foster [[transfer]] and [[robust learning]]?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that instructors deal with a double-edged sword.  Some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability.  Other students display the reverse problem: they show at least a glimmer of understanding of basic physics concepts and principles, but are unable to use this knowledge to solve quantitative problems.   Still other students master neither qualitative nor quantitative understanding of physics; very few master both.  Thus, the instructional challenge motivating this project is to find effective pedagogical strategies to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge.   Our scientific goal is to determine whether explicit and implicit learning can be effectively combined via post-practice dialogues that guide students in reflecting on the concepts and principles associated with a just-solved physics problem.  The main hypothesis tested is that, in the context of tutored problem solving, &#039;&#039;integrative reflective dialogues&#039;&#039; that explicitly tie qualitative knowledge to quantitative knowledge can improve quantitative problem-solving ability and retention of qualitative knowledge better than problem-solving practice (implicit learning) alone.  &lt;br /&gt;
&lt;br /&gt;
To test this hypothesis, we conducted an experiment in the PSLC Physics LearnLab at the US Naval Academy in sections that use the [[Andes]] physics tutoring system (VanLehn et al., 2005a, 2005b).  We compared students who were randomly assigned to one of three conditions on measures of qualitative and quantitative problem-solving performance.  The two treatment conditions engaged in automated reflective dialogues after solving quantitative physics problems, while the control condition solved the same set of problems (plus a few additional problems to balance time on task) without any reflective dialogues, using the standard version of Andes.  In one treatment condition, the reflective dialogues individually targeted the three main types of knowledge that experts employ during problem solving, according to Leonard, Dufresne, &amp;amp; Mestre (1996): knowledge about &#039;&#039;what&#039;&#039; principle(s) to apply to a given problem, &#039;&#039;how&#039;&#039; to apply these principles (e.g., what equations to use), and &#039;&#039;why&#039;&#039; to apply them—that is, what the applicability conditions are.   In a prior LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly &amp;amp; Treacy)]], this intervention significantly improved students’ qualitative understanding of basic mechanics, as measured by pre-test to post-test gain scores.  However, students did not outperform standard Andes users on more [[robust learning]] measures of [[transfer]] (e.g., performance on quantitative course exams) and on a measure of retention of qualitative problem-solving ability (Katz, Connelly, &amp;amp; Wilson, 2007).  The extended dialogue condition we implemented differs from the other dialogue condition in three main ways:  (1) reflective dialogues contained more problem variations (&#039;&#039;what if&#039;&#039; scenarios), designed to support both qualitative and quantitative knowledge (most of our previous &#039;&#039;what if&#039;&#039; scenarios were qualitative only); (2) these “what if” scenarios were tied to the corresponding Andes problem-solving context &#039;&#039;&#039;and&#039;&#039;&#039; to new contexts, to help support near and far [[transfer]]; and (3) students were prompted to state the rules ([[knowledge components]]) applied to solving the problem variations, in order to promote a principle-based approach to learning, and they were given feedback that makes these rules explicit.  Our goal is to determine whether reflective dialogues that make the links between qualitative and quantitative physics knowledge explicit are more effective than both our previous dialogues and an implicit learning condition that is based on problem-solving practice alone.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;Short-term retention&#039;&#039;. Performance on course exams that cover target topics (e.g., work and energy, translational dynamics, rotation, momentum).&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
Students within each course section were block-randomly assigned to one of three dialogue conditions in which the usage of [[Knowledge Construction Dialogues]] (KCDs) differed.  The short-KCD condition approximated the treatment condition from last year&#039;s study [[Reflective Dialogues (Katz)|(Katz, Connelly &amp;amp; Treacy)]], in which mostly conceptual dialogues followed problem solving on selected [[Andes]] problems.  The new long-KCD condition appended more quantitative practice and [[transfer]] content (via additional &#039;&#039;what if&#039;&#039; scenarios) to the short KCDs; students in this condition were assigned five fewer Andes problems than those in the short-KCD condition to attempt to equate time on task.  The control condition was identical to last year&#039;s; students saw no KCDs and were assigned five more Andes problems than the short-KCD students.&lt;br /&gt;
&lt;br /&gt;
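A minimal sketch of block-random assignment within course sections (Python; the rosters, seed, and blocking details below are hypothetical, not the procedure actually used in the study):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import random&lt;br /&gt;
&lt;br /&gt;
# hypothetical rosters: section -&amp;gt; student ids&lt;br /&gt;
sections = {&lt;br /&gt;
    &#039;sec1&#039;: [&#039;s01&#039;, &#039;s02&#039;, &#039;s03&#039;, &#039;s04&#039;, &#039;s05&#039;, &#039;s06&#039;],&lt;br /&gt;
    &#039;sec2&#039;: [&#039;s07&#039;, &#039;s08&#039;, &#039;s09&#039;, &#039;s10&#039;, &#039;s11&#039;, &#039;s12&#039;],&lt;br /&gt;
}&lt;br /&gt;
conditions = [&#039;control&#039;, &#039;short-KCD&#039;, &#039;long-KCD&#039;]&lt;br /&gt;
&lt;br /&gt;
random.seed(0)&lt;br /&gt;
assignment = {}&lt;br /&gt;
for sec, students in sections.items():&lt;br /&gt;
    roster = students[:]&lt;br /&gt;
    random.shuffle(roster)&lt;br /&gt;
    # walk the shuffled roster in blocks of three; each block gets the three conditions in random order&lt;br /&gt;
    for start in range(0, len(roster), len(conditions)):&lt;br /&gt;
        block = roster[start:start + len(conditions)]&lt;br /&gt;
        order = conditions[:]&lt;br /&gt;
        random.shuffle(order)&lt;br /&gt;
        for student, cond in zip(block, order):&lt;br /&gt;
            assignment[student] = cond&lt;br /&gt;
print(assignment)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;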
The following variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
* Number of problems completed before the post-test was administered&lt;br /&gt;
* Number of dialogues that the student completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* College major category&lt;br /&gt;
* Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
The main hypotheses tested were (a) that explicit, part-task training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, would be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems (replicating last year&#039;s study); and (b) that extended dialogues with additional quantitative practice in applying these [[knowledge components]] in different problem-solving contexts would better foster [[transfer]] and [[robust learning]].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Protected]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=7835</id>
		<title>Extending Reflective Dialogue Support (Katz &amp; Connelly)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=7835"/>
		<updated>2008-04-15T20:40:43Z</updated>

		<summary type="html">&lt;p&gt;Connelly: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Extending Automated Dialogue Support for Robust Learning of Physics ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz &amp;amp; John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 10/1/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 9/30/08&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 75&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 125 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability, while other students show at least a glimmer of understanding of basic physics concepts and principles but are unable to use this knowledge to solve quantitative problems.  The present research seeks to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge via [[post-practice reflection]] dialogues that guide students in learning and practicing the concepts and principles associated with a just-solved physics problem.  It builds upon our 2006 LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly &amp;amp; Treacy)]] by trying to better support [[robust learning]] via a third condition in which students work through longer dialogues designed to foster [[transfer]] by specifically tying together qualitative and quantitative knowledge in different contexts.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
* [[Transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge (i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them), combined with quantitative practice in applying these [[knowledge components]] in different contexts, enhance students’ problem-solving ability more than additional problem solving and better foster [[transfer]] and [[robust learning]]?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that instructors deal with a double-edged sword.  Some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability.  Other students display the reverse problem: they show at least a glimmer of understanding of basic physics concepts and principles, but are unable to use this knowledge to solve quantitative problems.   Still other students master neither qualitative nor quantitative understanding of physics; very few master both.  Thus, the instructional challenge motivating this project is to find effective pedagogical strategies to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge.   Our scientific goal is to determine whether explicit and implicit learning can be effectively combined via post-practice dialogues that guide students in reflecting on the concepts and principles associated with a just-solved physics problem.  The main hypothesis tested is that, in the context of tutored problem solving, &#039;&#039;integrative reflective dialogues&#039;&#039; that explicitly tie qualitative knowledge to quantitative knowledge can improve quantitative problem-solving ability and retention of qualitative knowledge better than problem-solving practice (implicit learning) alone.  &lt;br /&gt;
&lt;br /&gt;
To test this hypothesis, we conducted an experiment in the PSLC Physics LearnLab at the US Naval Academy in sections that use the [[Andes]] physics tutoring system (VanLehn et al., 2005a, 2005b).  We compared students who were randomly assigned to one of three conditions on measures of qualitative and quantitative problem-solving performance.  The two treatment conditions engaged in automated reflective dialogues after solving quantitative physics problems, while the control condition solved the same set of problems (plus a few additional problems to balance time on task) without any reflective dialogues, using the standard version of Andes.  In one treatment condition, the reflective dialogues individually targeted the three main types of knowledge that experts employ during problem solving, according to Leonard, Dufresne, &amp;amp; Mestre (1996): knowledge about &#039;&#039;what&#039;&#039; principle(s) to apply to a given problem, &#039;&#039;how&#039;&#039; to apply these principles (e.g., what equations to use), and &#039;&#039;why&#039;&#039; to apply them—that is, what the applicability conditions are.   In a prior LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly &amp;amp; Treacy)]], this intervention significantly improved students’ qualitative understanding of basic mechanics, as measured by pre-test to post-test gain scores.  However, students did not outperform standard Andes users on more [[robust learning]] measures of [[transfer]] (e.g., performance on quantitative course exams) and on a measure of retention of qualitative problem-solving ability (Katz, Connelly, &amp;amp; Wilson, 2007).  The extended dialogue condition we implemented differs from the other dialogue condition in three main ways:  (1) reflective dialogues contained more problem variations (&#039;&#039;what if&#039;&#039; scenarios), designed to support both qualitative and quantitative knowledge (most of our previous &#039;&#039;what if&#039;&#039; scenarios were qualitative only); (2) these “what if” scenarios were tied to the corresponding Andes problem-solving context &#039;&#039;&#039;and&#039;&#039;&#039; to new contexts, to help support near and far [[transfer]]; and (3) students were prompted to state the rules ([[knowledge components]]) applied to solving the problem variations, in order to promote a principle-based approach to learning, and they were given feedback that makes these rules explicit.  Our goal is to determine whether reflective dialogues that make the links between qualitative and quantitative physics knowledge explicit are more effective than both our previous dialogues and an implicit learning condition that is based on problem-solving practice alone.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;Short-term retention&#039;&#039;. Performance on course exams that cover target topics (e.g., work and energy, translational dynamics, rotation, momentum).&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
Students within each course section were block-randomly assigned to one of three dialogue conditions in which the usage of [[Knowledge Construction Dialogues]] (KCDs) differed.  The short-KCD condition approximated the treatment condition from last year&#039;s study [[Reflective Dialogues (Katz)|(Katz, Connelly &amp;amp; Treacy)]], in which mostly conceptual dialogues followed problem solving on selected [[Andes]] problems.  The new long-KCD condition appended more quantitative practice and [[transfer]] content (via additional &#039;&#039;what if&#039;&#039; scenarios) to the short KCDs; students in this condition were assigned five fewer Andes problems than those in the short-KCD condition to attempt to equate time on task.  The control condition was identical to last year&#039;s; students saw no KCDs and were assigned five more Andes problems than the short-KCD students.&lt;br /&gt;
&lt;br /&gt;
The following variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
* Number of problems completed before the post-test was administered&lt;br /&gt;
* Number of dialogues that the student completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* College major&lt;br /&gt;
* Pre-test score&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Protected]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=7834</id>
		<title>Extending Reflective Dialogue Support (Katz &amp; Connelly)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=7834"/>
		<updated>2008-04-15T20:17:52Z</updated>

		<summary type="html">&lt;p&gt;Connelly: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Extending Automated Dialogue Support for Robust Learning of Physics ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz &amp;amp; John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 10/1/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 9/30/08&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 75&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 125 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability, while other students show at least a glimmer of understanding of basic physics concepts and principles but are unable to use this knowledge to solve quantitative problems.  The present research seeks to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge via [[post-practice reflection]] dialogues that guide students in learning and practicing the concepts and principles associated with a just-solved physics problem.  It builds upon our 2006 LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly &amp;amp; Treacy)]] by trying to better support [[robust learning]] via a third condition in which students work through longer dialogues designed to foster [[transfer]] by specifically tying together qualitative and quantitative knowledge in different contexts.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
* [[Transfer]]&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge (i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them), combined with quantitative practice in applying these [[knowledge components]] in different contexts, enhance students’ problem-solving ability more than additional problem solving and better foster [[transfer]] and [[robust learning]]?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that instructors deal with a double-edged sword.  Some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability.  Other students display the reverse problem: they show at least a glimmer of understanding of basic physics concepts and principles, but are unable to use this knowledge to solve quantitative problems.   Still other students master neither qualitative nor quantitative understanding of physics; very few master both.  Thus, the instructional challenge motivating this project is to find effective pedagogical strategies to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge.   Our scientific goal is to determine whether explicit and implicit learning can be effectively combined via post-practice dialogues that guide students in reflecting on the concepts and principles associated with a just-solved physics problem.  The main hypothesis tested is that, in the context of tutored problem solving, &#039;&#039;integrative reflective dialogues&#039;&#039; that explicitly tie qualitative knowledge to quantitative knowledge can improve quantitative problem-solving ability and retention of qualitative knowledge better than problem-solving practice (implicit learning) alone.  &lt;br /&gt;
&lt;br /&gt;
To test this hypothesis, we conducted an experiment in the PSLC Physics LearnLab at the US Naval Academy in sections that use the [[Andes]] physics tutoring system (VanLehn et al., 2005a, 2005b).  We compared students who were randomly assigned to one of three conditions on measures of qualitative and quantitative problem-solving performance.  The two treatment conditions engaged in automated reflective dialogues after solving quantitative physics problems, while the control condition solved the same set of problems (plus a few additional problems to balance time on task) without any reflective dialogues, using the standard version of Andes.  In one treatment condition, the reflective dialogues individually targeted the three main types of knowledge that experts employ during problem solving, according to Leonard, Dufresne, &amp;amp; Mestre (1996): knowledge about &#039;&#039;what&#039;&#039; principle(s) to apply to a given problem, &#039;&#039;how&#039;&#039; to apply these principles (e.g., what equations to use), and &#039;&#039;why&#039;&#039; to apply them—that is, what the applicability conditions are.   In a prior LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly &amp;amp; Treacy)]], this intervention significantly improved students’ qualitative understanding of basic mechanics, as measured by pre-test to post-test gain scores.  However, students did not outperform standard Andes users on more [[robust learning]] measures of [[transfer]] (e.g., performance on quantitative course exams) and on a measure of retention of qualitative problem-solving ability (Katz, Connelly, &amp;amp; Wilson, 2007).  The extended dialogue condition we implemented differs from the other dialogue condition in three main ways:  (1) reflective dialogues contained more problem variations (&#039;&#039;what if&#039;&#039; scenarios), designed to support both qualitative and quantitative knowledge (most of our previous &#039;&#039;what if&#039;&#039; scenarios were qualitative only); (2) these “what if” scenarios were tied to the corresponding Andes problem-solving context &#039;&#039;&#039;and&#039;&#039;&#039; to new contexts, to help support near and far [[transfer]]; and (3) students were prompted to state the rules ([[knowledge components]]) applied to solving the problem variations, in order to promote a principle-based approach to learning, and they were given feedback that makes these rules explicit.  Our goal is to determine whether reflective dialogues that make the links between qualitative and quantitative physics knowledge explicit are more effective than both our previous dialogues and an implicit learning condition that is based on problem-solving practice alone.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Protected]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Reflective_Dialogues_(Katz)&amp;diff=7832</id>
		<title>Reflective Dialogues (Katz)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Reflective_Dialogues_(Katz)&amp;diff=7832"/>
		<updated>2008-04-15T19:38:01Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Background and Significance */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Do Reflective Dialogues that Explicitly Target the “What? How? and Why (not)?” Knowledge of Physics Problem Solving Promote Expert-like Planning Ability? ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz, John Connelly, Donald Treacy&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 3/1/06&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 6/30/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 67&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 750 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
One of the main differences between experts and novices in physics is that experts are more adept at identifying relevant principles and generating solution plans before starting to solve a problem (Chi, Glaser &amp;amp; Rees, 1982; Dufresne, Gerace, Hardiman, &amp;amp; Mestre, 1992; Priest &amp;amp; Lindsay, 1992).   We propose that this difference may be due to two factors: (1) in traditional physics courses, students are neither explicitly asked to plan nor given scaffolding to support planning during problem solving, and (2) many students lack the basic knowledge of physics concepts, principles, and procedures that is prerequisite for effective planning.    &lt;br /&gt;
&lt;br /&gt;
In this project, we tested the effectiveness of engaging students, after they solve problems in [[Andes]], in reflective dialogues that explicitly target the three main types of knowledge that experts employ during planning: knowledge about what principle(s) to apply to a given problem, how to apply these principles (e.g., what equations to use), and why (or why not) to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).  The main hypothesis tested is that explicit training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, will be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems.&lt;br /&gt;
   &lt;br /&gt;
To test this hypothesis, we conducted a Physics LearnLab study in which we compared the effectiveness of an experimental version of Andes that engaged students in reflective “what? how? and why?” dialogues after physics problem solving ([[explicit instruction]] condition) with a control version that gave students additional problem-solving practice after they solved an Andes problem—e.g., by having them identify bugs in a sample student solution to a problem ([[implicit instruction]] condition).  We predicted that students in the explicit condition would outperform students in the implicit condition with respect to gain score from pre-test to post-test and scores on course exams that targeted far transfer.  Consistent with prior research, the dialogues promoted conceptual understanding of physics. However, measures of retention and transfer to problem-solving ability showed only a marginal effect of completing the reflective dialogues.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
Does explicit training in the three main components of problem-solving knowledge—i.e., knowledge about what principles apply to a problem, how to apply these principles, and why to apply them—enhance students’ problem-solving ability?&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
Because experts develop schemata largely through a good deal of practice with solving various types of problems, traditional introductory college physics courses take a similar approach.  They are structured so that students are introduced to concepts, principles, and procedures in their text and through lectures, which they are then asked to apply to many problem-solving exercises.  The hope is that, with extensive practice, students will integrate [[procedural]] and [[conceptual knowledge]] and develop expert-like schemata and planning skills.  Unfortunately, many students don’t.  In addition to exiting these courses with lingering naïve misconceptions (e.g., Halloun &amp;amp; Hestenes, 1985; McDermott, 1984), they continue to solve problems largely by manipulating equations until the desired quantity is isolated—that is, by means-end analysis—instead of by identifying relevant principles and generating solution plans before starting to solve a problem, as experts do (Dufresne et al., 1992; Larkin, McDermott, Simon, &amp;amp; Simon, 1980; Priest &amp;amp; Lindsay, 1992).   &lt;br /&gt;
&lt;br /&gt;
We refer to the traditional approach to physics instruction described in the preceding paragraph as [[implicit instruction]], because it encourages the inductive development of abstractions (concepts, principles, and schemata) through repeated exposure to instances, instead of by explicitly reifying these abstractions (O&#039;Malley &amp;amp; Chamot, 1994).  In response to the limitations of implicit approaches to physics instruction that neither ask students to plan nor scaffold them in doing it, several instructional scientists have proposed methods that explicitly engage students in planning exercises (Dufresne et al., 1992; Leonard, Dufresne, &amp;amp; Mestre, 1996; Mestre, Dufresne, Gerace, &amp;amp; Hardiman, 1993).  These methods have met with modest success when tested mainly using high-achieving students (B or above in an introductory college physics course).  We suggest that the main reason that many students, even high achievers, are unable to use explicit planning methods effectively is that they lack the basic concepts, principles, and procedures that are prerequisites for effective planning.  &lt;br /&gt;
&lt;br /&gt;
This project tested the effectiveness of engaging students, after they solve problems in Andes (e.g., Gertner &amp;amp; VanLehn, 2000), in reflective dialogues that explicitly targeted the three main types of knowledge that experts employ during planning: knowledge about what principle(s) to apply to a given problem, how to apply these principles (e.g., what equations to use), and why to apply them—that is, what the applicability conditions are (Leonard, Dufresne, &amp;amp; Mestre, 1996).  Explicit instruction on these knowledge components proved to be effective and more efficient than giving students repeated practice in solving different types of problems.&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
* &#039;&#039;Gains in qualitative and quantitative knowledge&#039;&#039;.  Post-test score, and pre-test to post-test gain scores, on near and far [[transfer]] items.&lt;br /&gt;
* &#039;&#039;Short-term retention&#039;&#039;. Performance on course exams that cover target topics (e.g., work and energy, translational dynamics, rotation, momentum).&lt;br /&gt;
* &#039;&#039;[[Long-term retention]]&#039;&#039;.  Performance on final exam, taken several weeks after the intervention.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Students within each course section were block-randomly assigned to one of two dialogue conditions: standard [[Knowledge Construction Dialogues]] (KCDs) or control with no KCDs.  Students in the control condition were assigned an additional five Andes problems to attempt to better equate time on task.&lt;br /&gt;
&lt;br /&gt;
The following variables were entered into a regression analysis, with post-test score as the dependent variable:&lt;br /&gt;
&lt;br /&gt;
* Number of problems completed before the post-test was administered&lt;br /&gt;
* Number of dialogues that the student completed&lt;br /&gt;
* Grade point average (CQPR)&lt;br /&gt;
* College major&lt;br /&gt;
* Pre-test score&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
The main hypothesis tested was that explicit, part-task training on the “what? how? and why?” components of planning, via strategically staged reflective dialogues, would be more effective and efficient than the traditional approach of letting students acquire these components implicitly, through lots of practice with solving problems.&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
Analyses of pre- and post-test scores were more encouraging than for last year&#039;s study ([[Post-practice reflection (Katz)]]).  After omitting scores from one student with a post-test duration of less than two minutes, we were left with treatment and control groups of equal size (&#039;&#039;n&#039;&#039; = 33). We also re-classified two treatment subjects who did no dialogues as control subjects (effective treatment and control &#039;&#039;n&#039;&#039;s = 31 and 35, respectively), although these subjects were not able to access the five extra control-group Andes problems. As we had hoped, ANOVA showed no significant differences between the effective treatment and control groups on pre-test score (respective &#039;&#039;M&#039;&#039;s = 12.10 and 11.77; &#039;&#039;F&#039;&#039; &amp;lt; 1), but treatment subjects had higher mean post-test scores (17.97 vs. 15.57; &#039;&#039;F&#039;&#039;(1, 64) = 4.89, &#039;&#039;p&#039;&#039; = .031), mean raw gain scores (5.87 vs. 3.80; &#039;&#039;F&#039;&#039; = 5.62, &#039;&#039;p&#039;&#039; = .021), and mean Estes gain scores (0.330 vs. 0.208; &#039;&#039;F&#039;&#039; = 6.74, &#039;&#039;p&#039;&#039; = .012) than did control subjects. In short, without regard to problem-solving practice, subjects who did KCDs did significantly better on the post-test than those who did not.&lt;br /&gt;
&lt;br /&gt;
Student participation in both homework problem solving and dialogues was much improved over last year, with 21 of 34 control subjects (62%) and 23 of 33 treatment subjects (70%) finishing at least 80% of the assigned target problems (of 31 for control, 26 for treatment) prior to the post-test and with 25 treatment subjects (76%) finishing at least 80% of the 26 associated KCDs. However, participation was&lt;br /&gt;
still far from perfect; 7 treatment subjects (21%) completed half or fewer of the assigned KCDs on time, and 2 of them completed no target problems or KCDs on time.  We again treated KCD completion and target problem completion as continuous independent variables in regression analyses. Regressing post-test score on pre-test score, QPA, number of KCDs completed, and number of target problems completed&lt;br /&gt;
(&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .52, &#039;&#039;F&#039;&#039;(4, 61) = 16.70, &#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors, but only pre-test score, QPA, and KCD completion were statistically significant (&#039;&#039;p&#039;&#039;s &amp;lt; .001, .05, &amp;amp; .05, respectively); problem completion was &#039;&#039;ns&#039;&#039; (&#039;&#039;p&#039;&#039; = .54). Therefore, across all subjects it was KCD completion, as opposed to target homework problem completion, that significantly predicted post-test performance.&lt;br /&gt;
&lt;br /&gt;
To measure retention and transfer, we also analyzed scores from an hourly exam administered by one instructor to his two sections (&#039;&#039;n&#039;&#039;=47). All problems on this exam were quantitative and covered a subset of the units targeted during the intervention.  Scores on this exam ranged from 176 to 382 (of 400), and were highly correlated with&lt;br /&gt;
pre- and post-test scores; &#039;&#039;r&#039;&#039;s(45) = .54 and .71, &#039;&#039;p&#039;&#039;s &amp;lt; .0001 and .00001. However,&lt;br /&gt;
ANOVAs showed no differences between groups (Fs &amp;lt; 1), and regression of subscores on QPA, KCDs completed, and target problems completed (&#039;&#039;R&#039;&#039;&amp;lt;SUP&amp;gt;2&amp;lt;/SUP&amp;gt; = .54, &#039;&#039;F&#039;&#039;(3, 43) = 16.58,&lt;br /&gt;
&#039;&#039;p&#039;&#039; &amp;lt; .00001) showed positive contributions of all factors but a significant effect of only QPA (&#039;&#039;p&#039;&#039; &amp;lt; .00001). Therefore, student performance on the hourly exam was not significantly affected by either KCD completion or target problem completion.&lt;br /&gt;
&lt;br /&gt;
In summary, student performance on the post-test relative to the pre-test was significantly influenced by the number of dialogues they completed, as opposed to the number of target problems they completed prior to the post-test. However, neither measure had a significant effect on student performance on an exam that focused on&lt;br /&gt;
problem-solving ability.&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
This study is part of the Interactive Communication cluster, and its hypothesis is a specialization of the IC cluster’s central hypothesis.  The IC cluster’s central hypothesis is that robust learning occurs when two conditions are met:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;The learning event space should have paths that are mostly learning-by-doing along with alternative paths where a second agent does most of the work&#039;&#039;.  Since this experiment did not deal with collaboration between agents, it did not test this condition.  It did, however, show that the more that students engage in one form of learning-by-doing—namely, post-practice reflection—the more they learn.&lt;br /&gt;
* &#039;&#039;The student takes the learning-by-doing path unless it becomes too difficult&#039;&#039;.  More students took the learning-by-doing path than in our prior study, perhaps because this year instructors &#039;&#039;required&#039;&#039; them to do so (i.e., this time there were negative consequences for avoiding the dialogues).&lt;br /&gt;
&lt;br /&gt;
Analyses of retention and transfer to problem-solving ability did not reveal a significant effect of the reflective dialogues. We had hoped that the capstone dialogues that treatment students completed before selected problems, at the end of each unit, would support problem-solving ability. It is possible that the intervention consisted of too few capstone dialogues (only five) to observe any effect.&lt;br /&gt;
&lt;br /&gt;
What seems to be the critical feature of post-practice reflection (PPR) in supporting conceptual understanding is the explicit instruction that it provides in domain knowledge. PPR may help students to fill in knowledge gaps, resolve misconceptions,&lt;br /&gt;
and abstract from the case at hand so that they are better prepared to engage in constructive activity (e.g., self-explanation) in future problems. Preliminary research comparing Andes (which encourages implicit learning of problem-solving strategies) with Pyrenees, a system that teaches problem-solving strategies explicitly, also&lt;br /&gt;
suggests an advantage for explicit instruction (VanLehn et al., 2004). Further research is needed to identify the mechanisms that drive learning from reflective dialogue, and to increase its potential to enhance problem-solving ability in addition to conceptual knowledge.&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
==== Annotated bibliography ====&lt;br /&gt;
* Presentation to site visitors, 2005&lt;br /&gt;
* Full paper accepted at AIED 2007:&lt;br /&gt;
**Katz, S., Connelly, J., &amp;amp; Wilson, C. (in press). Out of the lab and into the classroom: An evaluation of reflective dialogue in Andes. &#039;&#039;Proceedings of AIED07&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
*Chi, M. T. H., Glaser, R., &amp;amp; Rees, E. (1982). Expertise in problem solving. In R. J. Sternberg (Ed.), &#039;&#039;Advances in the Psychology of Human Intelligence, Vol. 1&#039;&#039; (pp. 7-75). Hillsdale, NJ: Erlbaum.&lt;br /&gt;
*Dufresne, R. J., Gerace, W. J., Hardiman, P. T., &amp;amp; Mestre, J. P. (1992). Constraining novices to perform expertlike analyses: Effects on schema acquisition. &#039;&#039;The Journal of the Learning Sciences, 2&#039;&#039; (3), 307-331.&lt;br /&gt;
*Gertner, A. S., &amp;amp; VanLehn, K. (2000). Andes: A coached problem solving environment for physics. In G. Gauthier, C. Frasson, &amp;amp; K. VanLehn (Eds.), &#039;&#039;ITS 2000: Proceedings of the 5th International Conference on Intelligent Tutoring Systems&#039;&#039; (pp. 133-142). Berlin: Springer-Verlag.&lt;br /&gt;
*Larkin, J., McDermott, J., Simon, D. P., &amp;amp; Simon, H. A. (1980). Expert and novice performance in solving physics problems. &#039;&#039;Science, 208&#039;&#039;, 1335-1342.&lt;br /&gt;
*Leonard, W. J., Dufresne, R. J., &amp;amp; Mestre, J. P. (1996). Using qualitative problem-solving strategies to highlight the role of conceptual knowledge in solving problems. &#039;&#039;American Journal of Physics, 64&#039;&#039; (12), 1495-1503.&lt;br /&gt;
*Mestre, J. P., Dufresne, R. J., Gerace, W. J., &amp;amp; Hardiman, P. T. (1993). Promoting skilled problem-solving behavior among beginning physics students. &#039;&#039;Journal of Research in Science Teaching, 30&#039;&#039;, 303-317.&lt;br /&gt;
*O’Malley, J. M., &amp;amp; Chamot, A. U. (1994). &#039;&#039;The CALLA Handbook&#039;&#039;. Reading, MA: Addison-Wesley.&lt;br /&gt;
*Priest, A. G., &amp;amp; Lindsay, R. O. (1992). New light on novice-expert differences in physics problem solving. &#039;&#039;British Journal of Psychology, 83&#039;&#039;, 389-405.&lt;br /&gt;
*VanLehn, K., Bhembe, D., Chi, M., Lynch, C., Schulze, K., Shelby, R., Taylor, L., Treacy, D., Weinstein, A., &amp;amp; Wintersgill, M. (2004). Implicit versus explicit learning of strategies in a nonprocedural cognitive skill. In J. C. Lester, R. M. Vicari, &amp;amp; F. Paraguacu, (Eds.), &#039;&#039;Intelligent Tutoring Systems: 7th International Conference&#039;&#039; (pp. 521-530). Berlin: Springer-Verlag.&lt;br /&gt;
&lt;br /&gt;
==== Connections ====&lt;br /&gt;
This project shares features with the following research projects:&lt;br /&gt;
&lt;br /&gt;
Use of Questions during learning&lt;br /&gt;
* [[Post-practice reflection (Katz)]] &lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[FrenchCulture | FrenchCulture (Amy Ogan, Christopher Jones, Vincent Aleven)]]&lt;br /&gt;
* [[Rummel_Scripted_Collaborative_Problem_Solving|Collaborative Extensions to the Cognitive Tutor Algebra: Scripted Collaborative Problem Solving (Rummel, Diziol, McLaren, &amp;amp; Spada)]]&lt;br /&gt;
&lt;br /&gt;
Self explanations during learning&lt;br /&gt;
* [[Craig_questions|Deep-level questions during example studying (Craig &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study2 | The Effects of Interaction on Robust Learning (Hausmann &amp;amp; Chi)]]&lt;br /&gt;
* [[Hausmann Study | A comparison of self-explanation to instructional explanation (Hausmann &amp;amp; Vanlehn)]]&lt;br /&gt;
&lt;br /&gt;
====  Future plans ====&lt;br /&gt;
Our future plans for January 2008 - December 2008:&lt;br /&gt;
* write journal article expanding AIED conference paper&lt;br /&gt;
&lt;br /&gt;
[[Category:Study]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=7772</id>
		<title>Extending Reflective Dialogue Support (Katz &amp; Connelly)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=7772"/>
		<updated>2008-04-10T21:05:11Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Abstract */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Extending Automated Dialogue Support for Robust Learning of Physics ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz &amp;amp; John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 10/1/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 9/30/08&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 75&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 125 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that instructors deal with a double-edged sword.  Some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability.  Other students display the reverse problem: they show at least a glimmer of understanding of basic physics concepts and principles, but are unable to use this knowledge to solve quantitative problems.   Still other students master neither qualitative nor quantitative understanding of physics; very few master both.  Thus, the instructional challenge motivating this project is to find effective pedagogical strategies to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge.   Our scientific goal is to determine whether explicit and implicit learning can be effectively combined via post-practice dialogues that guide students in reflecting on the concepts and principles associated with a just-solved physics problem.  The main hypothesis tested is that, in the context of tutored problem solving, &#039;&#039;integrative reflective dialogues&#039;&#039; that explicitly tie qualitative knowledge to quantitative knowledge can improve quantitative problem-solving ability and retention of qualitative knowledge better than problem-solving practice (implicit learning) alone.  &lt;br /&gt;
&lt;br /&gt;
To test this hypothesis, we conducted an experiment in the PSLC Physics LearnLab at the US Naval Academy in sections that use the [[Andes]] physics tutoring system (VanLehn et al., 2005a, 2005b).  We compared students who were randomly assigned to one of three conditions on measures of qualitative and quantitative problem-solving performance.  The two treatment conditions engaged in automated reflective dialogues after solving quantitative physics problems, while the control condition solved the same set of problems (plus a few additional problems to balance time on task) without any reflective dialogues, using the standard version of Andes.  In one treatment condition, the reflective dialogues individually targeted the three main types of knowledge that experts employ during problem solving, according to Leonard, Dufresne, &amp;amp; Mestre (1996): knowledge about &#039;&#039;what&#039;&#039; principle(s) to apply to a given problem, &#039;&#039;how&#039;&#039; to apply these principles (e.g., what equations to use), and &#039;&#039;why&#039;&#039; to apply them—that is, what their applicability conditions are.  In a prior LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], this intervention significantly improved students’ qualitative understanding of basic mechanics, as measured by pre-test to post-test gain scores.  However, students did not outperform standard Andes users on more [[robust learning]] measures of [[transfer]] (e.g., performance on quantitative course exams) or on a measure of retention of qualitative problem-solving ability (Katz, Connelly, &amp;amp; Wilson, 2007).  The extended dialogue condition we implemented differs from the other dialogue condition in three main ways: (1) reflective dialogues contained more problem variations (&#039;&#039;what if&#039;&#039; scenarios), designed to support both qualitative and quantitative knowledge (most of our previous &#039;&#039;what if&#039;&#039; scenarios were qualitative only); (2) these &#039;&#039;what if&#039;&#039; scenarios were tied to the corresponding Andes problem-solving context &#039;&#039;&#039;and&#039;&#039;&#039; to new contexts, to help support near and far [[transfer]]; and (3) students were prompted to state the rules ([[knowledge components]]) applied in solving the problem variations, in order to promote a principle-based approach to learning, and they were given feedback that makes these rules explicit.  Our goal is to determine whether reflective dialogues that make the links between qualitative and quantitative physics knowledge explicit are more effective than both our previous dialogues and an implicit learning condition based on problem-solving practice alone.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
* [[Transfer]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Protected]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=7771</id>
		<title>Extending Reflective Dialogue Support (Katz &amp; Connelly)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Extending_Reflective_Dialogue_Support_(Katz_%26_Connelly)&amp;diff=7771"/>
		<updated>2008-04-10T20:54:46Z</updated>

		<summary type="html">&lt;p&gt;Connelly: /* Abstract */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Extending Automated Dialogue Support for Robust Learning of Physics ==&lt;br /&gt;
 Sandra Katz&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Sandra Katz &amp;amp; John Connelly&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || 10/1/07&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || 9/30/08&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || USNA&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || General Physics I&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = 75&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || approx. 125 hrs&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
Research on student understanding and problem-solving ability in first-year college physics courses shows that instructors deal with a double-edged sword.  Some students become adept at solving quantitative problems but do poorly on tests of conceptual knowledge and qualitative problem-solving ability.  Other students display the reverse problem: they show at least a glimmer of understanding of basic physics concepts and principles, but are unable to use this knowledge to solve quantitative problems.   Still other students master neither qualitative nor quantitative understanding of physics; very few master both.  Thus, the instructional challenge motivating this project is to find effective pedagogical strategies to &#039;&#039;integrate&#039;&#039; quantitative and qualitative knowledge.   Our scientific goal is to determine whether explicit and implicit learning can be effectively combined via post-practice dialogues that guide students in reflecting on the concepts and principles associated with a just-solved physics problem.  The main hypothesis tested is that, in the context of tutored problem solving, &#039;&#039;integrative reflective dialogues&#039;&#039; that explicitly tie qualitative knowledge to quantitative knowledge can improve quantitative problem-solving ability and retention of qualitative knowledge better than problem-solving practice (implicit learning) alone.  &lt;br /&gt;
&lt;br /&gt;
To test this hypothesis, we conducted an experiment in the PSLC Physics LearnLab at the US Naval Academy in sections that use the [[Andes]] physics tutoring system (VanLehn et al., 2005a, 2005b).  We compared students who were randomly assigned to one of three conditions on measures of qualitative and quantitative problem-solving performance.  The two treatment conditions engaged in automated reflective dialogues after solving quantitative physics problems, while the control condition solved the same set of problems (plus a few additional problems to balance time on task) without any reflective dialogues, using the standard version of Andes.  In one treatment condition, the reflective dialogues individually targeted the three main types of knowledge that experts employ during problem solving, according to Leonard, Dufresne, &amp;amp; Mestre (1996): knowledge about &#039;&#039;what&#039;&#039; principle(s) to apply to a given problem, &#039;&#039;how&#039;&#039; to apply these principles (e.g., what equations to use), and &#039;&#039;why&#039;&#039; to apply them—that is, what their applicability conditions are.  In a prior LearnLab study [[Reflective Dialogues (Katz)|(Katz, Connelly, &amp;amp; Treacy)]], this intervention significantly improved students’ qualitative understanding of basic mechanics, as measured by pre-test to post-test gain scores.  However, students did not outperform standard Andes users on more [[robust learning]] measures of [[transfer]] (e.g., performance on quantitative course exams) or on a measure of retention of qualitative problem-solving ability (Katz, Connelly, &amp;amp; Wilson, 2007).  The alternative dialogue condition we evaluated differs from the other dialogue condition in three main ways: (1) reflective dialogues contained more problem variations (&#039;&#039;what if&#039;&#039; scenarios), designed to support both qualitative and quantitative knowledge (most of our previous &#039;&#039;what if&#039;&#039; scenarios were qualitative only); (2) these &#039;&#039;what if&#039;&#039; scenarios were tied to the corresponding Andes problem-solving context &#039;&#039;&#039;and&#039;&#039;&#039; to new contexts, to help support near and far [[transfer]]; and (3) students were prompted to state the rules ([[knowledge components]]) applied in solving the problem variations, in order to promote a principle-based approach to learning, and they were given feedback that makes these rules explicit.  Our goal is to determine whether reflective dialogues that make the links between qualitative and quantitative physics knowledge explicit are more effective than both our previous dialogues and an implicit learning condition based on problem-solving practice alone.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
* [[Post-practice reflection]]&lt;br /&gt;
* [[Reflection questions]]&lt;br /&gt;
* [[Knowledge Construction Dialogues]] (KCDs)&lt;br /&gt;
* [[Transfer]]&lt;br /&gt;
&lt;br /&gt;
[[Category:Protected]]&lt;/div&gt;</summary>
		<author><name>Connelly</name></author>
	</entry>
</feed>