<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://learnlab.org/mediawiki-1.44.2/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Idoroll</id>
	<title>Theory Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://learnlab.org/mediawiki-1.44.2/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Idoroll"/>
	<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Special:Contributions/Idoroll"/>
	<updated>2026-05-01T14:14:12Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.44.2</generator>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Productive_Failure_in_a_Chemistry_Virtual_Lab&amp;diff=10530</id>
		<title>Roll - Productive Failure in a Chemistry Virtual Lab</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Productive_Failure_in_a_Chemistry_Virtual_Lab&amp;diff=10530"/>
		<updated>2010-02-02T01:50:26Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Productive Failure in a Chemistry Virtual Laboratory (Roll)&lt;br /&gt;
==Summary Table==&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Ido Roll, David Yaron&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039; || Michael Karabinos (Instructional Designer, Carnegie Mellon), Sophia Nussbaum (Course Instructor and Instructional Designer, University of British Columbia)&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || March, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || March, 2011&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || University of British Columbia, Freshman Laboratory Course&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || Chemistry&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || 1100&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || 3300&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || Log files of student interactions with virtual lab and other instructional materials.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
This study takes place in a set of online materials that students complete in preparation for a physical laboratory experience involving the design of a buffer solution. At the beginning of the online activities, students complete a set of activities in the ChemCollective virtual laboratory. In past years, these virtual lab activities followed a direct-instruction approach, in which students add acid to a buffered versus an unbuffered solution and compare the effects. This study adds a condition in which students are asked to create their own buffer solution with only minimal guidance. Although little guidance is given, the task is designed to [[focusing | attract]] students&#039; attention to deep features of the domain. This complex task is the topic of the instruction that follows. Students are not expected to succeed at the task in this initial exploratory phase; the hypothesis is that, by identifying the deep structure of the domain, students will learn more from the subsequent instruction. Thus, engaging with the task at the beginning will better prepare them for the formal knowledge that follows.&lt;br /&gt;
&lt;br /&gt;
==Background &amp;amp; Significance==&lt;br /&gt;
==Glossary==&lt;br /&gt;
==Research questions==&lt;br /&gt;
Will engagement with an initial exploratory phase of instruction promote student learning, even though students are likely to fail at meeting the goals of the exploratory activity (designing a buffer solution with a specified pH and buffer capacity)?&lt;br /&gt;
==Independent Variables==&lt;br /&gt;
==Dependent Variables==&lt;br /&gt;
==Hypothesis==&lt;br /&gt;
==Results==&lt;br /&gt;
==Explanation==&lt;br /&gt;
==Further Information==&lt;br /&gt;
This study is currently under design and will be carried out at UBC following the university&#039;s break for the Winter Olympics (data collection begins on March 1, 2010).&lt;br /&gt;
===Connections to Other Studies===&lt;br /&gt;
===Annotated Bibliography===&lt;br /&gt;
===References===&lt;br /&gt;
===Future Plans===&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10242</id>
		<title>Roll - Inquiry</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10242"/>
		<updated>2009-12-04T23:04:43Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Helping Students Become Better Scientists Using Structured Inquiry Tasks =&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Ido Roll&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039; || Doug Bonn, James Day&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || Jan. 1, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || May. 31, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Site&#039;&#039;&#039; || UBC (not a LearnLab site)&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || Physics&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = ~200&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || ~1,000.&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || no data yet&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
Scientific inquiry tasks have the potential to help students acquire deep understanding of domain knowledge, as well as to improve their scientific reasoning skills. This project investigates scientific reasoning behaviors within one type of inquiry task: structured invention tasks.&lt;br /&gt;
The project uses qualitative and quantitative methods to answer three questions:&lt;br /&gt;
1. What scientific reasoning skills are being practiced during structured invention tasks?&lt;br /&gt;
2. How well do these skills transfer across topics and over time?&lt;br /&gt;
3. How can these skills be supported and improved?&lt;br /&gt;
&lt;br /&gt;
==Background &amp;amp; Significance==&lt;br /&gt;
&lt;br /&gt;
Scientific reasoning skills constitute an important class of SRL behaviors. While traditional inquiry tasks have the inherent benefit of letting students practice key self-regulatory skills, they have been shown to be an inefficient, and often unproductive, means of instruction. In the absence of adequate support, students often flounder and get lost within the infinite range of possibilities (Veermans, de Jong &amp;amp; van Joolingen, 2000). Consequently, students often fail to learn the target concepts, or at least do not learn them as efficiently as with direct instruction (Kirschner, Sweller &amp;amp; Clark, 2006).&lt;br /&gt;
&lt;br /&gt;
This project focuses on scientific reasoning behaviors during inquiry, and studies the relationships between scientific reasoning behavior and domain learning and motivation. I focus on the Invention as Preparation for Learning framework (IPL; Schwartz &amp;amp; Martin, 2004; Roll, Aleven &amp;amp; Koedinger, 2009). In IPL, students are asked to invent novel mathematical procedures prior to receiving direct instruction on the canonical procedures. IPL was shown to improve students’ domain knowledge and motivation (Kapur &amp;amp; Lee, 2009; Roll, Aleven &amp;amp; Koedinger, 2009; Schwartz &amp;amp; Martin, 2004). At the same time, students demonstrated poor metacognitive behavior and a lack of learning at the metacognitive level (Roll, 2009).&lt;br /&gt;
&lt;br /&gt;
The current project first seeks to identify the scientific reasoning skills that are practiced in IPL. The second stage assesses the transferability of these skills (across domain topics and over time). Finally, I will investigate the effect of supporting these skills on students&#039; domain and metacognitive learning.&lt;br /&gt;
&lt;br /&gt;
==Glossary==&lt;br /&gt;
&lt;br /&gt;
* [[invention task|Structured Invention Tasks]]&lt;br /&gt;
* [[metacognition|Self-Regulated Learning]]&lt;br /&gt;
* [[Scientific Reasoning]]&lt;br /&gt;
* [[Inquiry tasks]]&lt;br /&gt;
&lt;br /&gt;
==Research questions==&lt;br /&gt;
&lt;br /&gt;
The project has 3 steps, each of which focuses on a different research question:&lt;br /&gt;
&lt;br /&gt;
Step 1: What scientific reasoning skills are being used and practiced during structured invention tasks?&lt;br /&gt;
&lt;br /&gt;
Step 2: How well do these skills transfer across topics and over time (that is, is there learning of scientific-reasoning skills from one invention task to the next)?&lt;br /&gt;
&lt;br /&gt;
Step 3: What support can best improve performance and learning of scientific reasoning skills?&lt;br /&gt;
&lt;br /&gt;
==Independent Variables==&lt;br /&gt;
&lt;br /&gt;
The first step of the project is an ethnography. Independent variables are the domain, topic, task, number of prior invention tasks, and experience of the group (that is, whether the group is new or has been working together for a long time).&lt;br /&gt;
&lt;br /&gt;
==Dependent Variables==&lt;br /&gt;
&lt;br /&gt;
The first step of this study includes analysis of transcripts of students working on structured invention tasks.&lt;br /&gt;
&lt;br /&gt;
==Hypothesis==&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
===Connections to Other Studies===&lt;br /&gt;
&lt;br /&gt;
This is part 2 of the [[Roll IPL|IPL]] study previously completed by Roll.&lt;br /&gt;
&lt;br /&gt;
It is also related to Dan Belenky and Tim Nokes&#039;s study of the motivational benefits of IPL.&lt;br /&gt;
&lt;br /&gt;
===Annotated Bibliography===&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
&lt;br /&gt;
Kapur, M., &amp;amp; Lee, K. (2009). Designing for productive failure in mathematical problem solving. In Proceedings of the 31st annual conference of the cognitive science society. (pp. 2632-7). Austin, TX: Cognitive Science Society.&lt;br /&gt;
	&lt;br /&gt;
Kirschner, P. A., Sweller, J., &amp;amp; Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75-86.&lt;br /&gt;
	&lt;br /&gt;
Klahr, D., &amp;amp; Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12(1), 1-48.&lt;br /&gt;
	&lt;br /&gt;
Roll, I., Aleven, V., &amp;amp; Koedinger, K. R. (2009). Helping students know &#039;further&#039; - increasing the flexibility of students&#039; knowledge using symbolic invention tasks. In N. A. Taatgen, &amp;amp; H. van Rijn (Eds.), Proceedings of the 31st annual conference of the cognitive science society. (pp. 1169-74). Austin, TX: Cognitive Science Society.&lt;br /&gt;
&lt;br /&gt;
Schwartz, D. L., &amp;amp; Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129-184.&lt;br /&gt;
&lt;br /&gt;
Veermans, K., de Jong, T., &amp;amp; van Joolingen, W. R. (2000). Promoting self-directed learning in simulation-based discovery learning environments through intelligent support. Interactive Learning Environments, 8(3), 229-255.&lt;br /&gt;
&lt;br /&gt;
===Future Plans===&lt;br /&gt;
&lt;br /&gt;
Spring 2010:&lt;br /&gt;
Conduct an ethnography in a first-year physics lab that uses invention tasks as a normal classroom practice.&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10241</id>
		<title>Roll - Inquiry</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10241"/>
		<updated>2009-12-04T23:04:02Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Future Plans */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Helping Students Become Better Scientists Using Structured Inquiry Tasks =&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Ido Roll&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039; || Doug Bonn, James Day&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || Jan. 1, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || May. 31, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Site&#039;&#039;&#039; || UBC (not a LearnLab site)&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || Physics&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = ~200&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || ~1,000.&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || no data yet&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
Scientific inquiry tasks have the potential to help students acquire deep understanding of domain knowledge, as well as to improve their scientific reasoning skills. This project investigates scientific reasoning behaviors within one type of inquiry task: structured invention tasks.&lt;br /&gt;
The project uses qualitative and quantitative methods to answer three questions:&lt;br /&gt;
1. What scientific reasoning skills are being practiced during structured invention tasks?&lt;br /&gt;
2. How well do these skills transfer across topics and over time?&lt;br /&gt;
3. How can these skills be supported and improved?&lt;br /&gt;
&lt;br /&gt;
==Background &amp;amp; Significance==&lt;br /&gt;
&lt;br /&gt;
Scientific reasoning skills constitute an important class of SRL behaviors. While traditional inquiry tasks have the inherent benefit of letting students practice key self-regulatory skills, they have been shown to be an inefficient, and often unproductive, means of instruction. In the absence of adequate support, students often flounder and get lost within the infinite range of possibilities (Veermans, de Jong &amp;amp; van Joolingen, 2000). Consequently, students often fail to learn the target concepts, or at least do not learn them as efficiently as with direct instruction (Kirschner, Sweller &amp;amp; Clark, 2006).&lt;br /&gt;
&lt;br /&gt;
This project focuses on scientific reasoning behaviors during inquiry, and studies the relationships between scientific reasoning behavior and domain learning and motivation. I focus on the Invention as Preparation for Learning framework (IPL; Schwartz &amp;amp; Martin, 2004; Roll, Aleven &amp;amp; Koedinger, 2009). In IPL, students are asked to invent novel mathematical procedures prior to receiving direct instruction on the canonical procedures. IPL was shown to improve students’ domain knowledge and motivation (Kapur &amp;amp; Lee, 2009; Roll, Aleven &amp;amp; Koedinger, 2009; Schwartz &amp;amp; Martin, 2004). At the same time, students demonstrated poor metacognitive behavior and a lack of learning at the metacognitive level (Roll, 2009).&lt;br /&gt;
&lt;br /&gt;
The current project first seeks to identify the scientific reasoning skills that are practiced in IPL. The second stage assesses the transferability of these skills (across domain topics and over time). Finally, I will investigate the effect of supporting these skills on students&#039; domain and metacognitive learning.&lt;br /&gt;
&lt;br /&gt;
==Glossary==&lt;br /&gt;
&lt;br /&gt;
* [[invention task|Structured Invention Tasks]]&lt;br /&gt;
* [[metacognition|Self-Regulated Learning]]&lt;br /&gt;
* [[Scientific Reasoning]]&lt;br /&gt;
* [[Inquiry tasks]]&lt;br /&gt;
&lt;br /&gt;
==Research questions==&lt;br /&gt;
&lt;br /&gt;
The project has 3 steps, each of which focuses on a different research question:&lt;br /&gt;
&lt;br /&gt;
Step 1: What scientific reasoning skills are being used and practiced during structured invention tasks?&lt;br /&gt;
&lt;br /&gt;
Step 2: How well do these skills transfer across topics and over time (that is, is there learning of scientific-reasoning skills from one invention task to the next)?&lt;br /&gt;
&lt;br /&gt;
Step 3: What support can best improve performance and learning of scientific reasoning skills?&lt;br /&gt;
&lt;br /&gt;
==Independent Variables==&lt;br /&gt;
&lt;br /&gt;
The first step of the project is an ethnography. Independent variables are the domain, topic, task, number of prior invention tasks, and experience of the group (that is, whether the group is new or has been working together for a long time).&lt;br /&gt;
&lt;br /&gt;
==Dependent Variables==&lt;br /&gt;
&lt;br /&gt;
The first step of this study includes analysis of transcripts of students working on structured invention tasks.&lt;br /&gt;
&lt;br /&gt;
==Hypothesis==&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
===Connections to Other Studies===&lt;br /&gt;
&lt;br /&gt;
This is part 2 of the [[Roll IPL|IPL]] study previously completed by Roll.&lt;br /&gt;
&lt;br /&gt;
It is also related to Dan Belenky and Tim Nokes&#039;s study of the motivational benefits of IPL.&lt;br /&gt;
&lt;br /&gt;
===Annotated Bibliography===&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
&lt;br /&gt;
Kirschner, P. A., Sweller, J., &amp;amp; Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75-86.&lt;br /&gt;
	&lt;br /&gt;
Klahr, D., &amp;amp; Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12(1), 1-48.&lt;br /&gt;
	&lt;br /&gt;
Roll, I., Aleven, V., &amp;amp; Koedinger, K. R. (2009). Helping students know &#039;further&#039; - increasing the flexibility of students&#039; knowledge using symbolic invention tasks. In N. A. Taatgen, &amp;amp; H. van Rijn (Eds.), Proceedings of the 31st annual conference of the cognitive science society. (pp. 1169-74). Austin, TX: Cognitive Science Society.&lt;br /&gt;
&lt;br /&gt;
Schwartz, D. L., &amp;amp; Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129-184.&lt;br /&gt;
&lt;br /&gt;
Veermans, K., de Jong, T., &amp;amp; van Joolingen, W. R. (2000). Promoting self-directed learning in simulation-based discovery learning environments through intelligent support. Interactive Learning Environments, 8(3), 229-255.&lt;br /&gt;
&lt;br /&gt;
===Future Plans===&lt;br /&gt;
&lt;br /&gt;
Spring 2010:&lt;br /&gt;
Conduct an ethnography in a first-year physics lab that uses invention tasks as a normal classroom practice.&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10240</id>
		<title>Roll - Inquiry</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10240"/>
		<updated>2009-12-04T23:03:41Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Connections to Other Studies */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Helping Students Become Better Scientists Using Structured Inquiry Tasks =&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Ido Roll&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039; || Doug Bonn, James Day&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || Jan. 1, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || May. 31, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Site&#039;&#039;&#039; || UBC (not a LearnLab site)&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || Physics&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = ~200&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || ~1,000.&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || no data yet&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
Scientific inquiry tasks have the potential to help students acquire deep understanding of domain knowledge, as well as to improve their scientific reasoning skills. This project investigates scientific reasoning behaviors within one type of inquiry task: structured invention tasks.&lt;br /&gt;
The project uses qualitative and quantitative methods to answer three questions:&lt;br /&gt;
1. What scientific reasoning skills are being practiced during structured invention tasks?&lt;br /&gt;
2. How well do these skills transfer across topics and over time?&lt;br /&gt;
3. How can these skills be supported and improved?&lt;br /&gt;
&lt;br /&gt;
==Background &amp;amp; Significance==&lt;br /&gt;
&lt;br /&gt;
Scientific reasoning skills constitute an important class of SRL behaviors. While traditional inquiry tasks have the inherent benefit of letting students practice key self-regulatory skills, they have been shown to be an inefficient, and often unproductive, means of instruction. In the absence of adequate support, students often flounder and get lost within the infinite range of possibilities (Veermans, de Jong &amp;amp; van Joolingen, 2000). Consequently, students often fail to learn the target concepts, or at least do not learn them as efficiently as with direct instruction (Kirschner, Sweller &amp;amp; Clark, 2006).&lt;br /&gt;
&lt;br /&gt;
This project focuses on scientific reasoning behaviors during inquiry, and studies the relationships between scientific reasoning behavior and domain learning and motivation. I focus on the Invention as Preparation for Learning framework (IPL; Schwartz &amp;amp; Martin, 2004; Roll, Aleven &amp;amp; Koedinger, 2009). In IPL, students are asked to invent novel mathematical procedures prior to receiving direct instruction on the canonical procedures. IPL was shown to improve students’ domain knowledge and motivation (Kapur &amp;amp; Lee, 2009; Roll, Aleven &amp;amp; Koedinger, 2009; Schwartz &amp;amp; Martin, 2004). At the same time, students demonstrated poor metacognitive behavior and a lack of learning at the metacognitive level (Roll, 2009).&lt;br /&gt;
&lt;br /&gt;
The current project first seeks to identify the scientific reasoning skills that are practiced in IPL. The second stage assesses the transferability of these skills (across domain topics and over time). Finally, I will investigate the effect of supporting these skills on students&#039; domain and metacognitive learning.&lt;br /&gt;
&lt;br /&gt;
==Glossary==&lt;br /&gt;
&lt;br /&gt;
* [[invention task|Structured Invention Tasks]]&lt;br /&gt;
* [[metacognition|Self-Regulated Learning]]&lt;br /&gt;
* [[Scientific Reasoning]]&lt;br /&gt;
* [[Inquiry tasks]]&lt;br /&gt;
&lt;br /&gt;
==Research questions==&lt;br /&gt;
&lt;br /&gt;
The project has 3 steps, each of which focuses on a different research question:&lt;br /&gt;
&lt;br /&gt;
Step 1: What scientific reasoning skills are being used and practiced during structured invention tasks?&lt;br /&gt;
&lt;br /&gt;
Step 2: How well do these skills transfer across topics and over time (that is, is there learning of scientific-reasoning skills from one invention task to the next)?&lt;br /&gt;
&lt;br /&gt;
Step 3: What support can best improve performance and learning of scientific reasoning skills?&lt;br /&gt;
&lt;br /&gt;
==Independent Variables==&lt;br /&gt;
&lt;br /&gt;
The first step of the project is an ethnography. Independent variables are the domain, topic, task, number of prior invention tasks, and experience of the group (that is, whether the group is new or has been working together for a long time).&lt;br /&gt;
&lt;br /&gt;
==Dependent Variables==&lt;br /&gt;
&lt;br /&gt;
The first step of this study includes analysis of transcripts of students working on structured invention tasks.&lt;br /&gt;
&lt;br /&gt;
==Hypothesis==&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
===Connections to Other Studies===&lt;br /&gt;
&lt;br /&gt;
This is part 2 of the [[Roll IPL|IPL]] study previously completed by Roll.&lt;br /&gt;
&lt;br /&gt;
It is also related to Dan Belenky and Tim Nokes&#039;s study of the motivational benefits of IPL.&lt;br /&gt;
&lt;br /&gt;
===Annotated Bibliography===&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
&lt;br /&gt;
Kirschner, P. A., Sweller, J., &amp;amp; Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75-86.&lt;br /&gt;
	&lt;br /&gt;
Klahr, D., &amp;amp; Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12(1), 1-48.&lt;br /&gt;
	&lt;br /&gt;
Roll, I., Aleven, V., &amp;amp; Koedinger, K. R. (2009). Helping students know &#039;further&#039; - increasing the flexibility of students&#039; knowledge using symbolic invention tasks. In N. A. Taatgen, &amp;amp; H. van Rijn (Eds.), Proceedings of the 31st annual conference of the cognitive science society. (pp. 1169-74). Austin, TX: Cognitive Science Society.&lt;br /&gt;
&lt;br /&gt;
Schwartz, D. L., &amp;amp; Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129-184.&lt;br /&gt;
&lt;br /&gt;
Veermans, K., de Jong, T., &amp;amp; van Joolingen, W. R. (2000). Promoting self-directed learning in simulation-based discovery learning environments through intelligent support. Interactive Learning Environments, 8(3), 229-255.&lt;br /&gt;
&lt;br /&gt;
===Future Plans===&lt;br /&gt;
&lt;br /&gt;
Spring 2010:&lt;br /&gt;
Conduct an ethnography in a first-year physics lab that uses invention tasks as a normal classroom practice.&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10239</id>
		<title>Roll - Inquiry</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10239"/>
		<updated>2009-12-04T23:00:25Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Independent Variables */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Helping Students Become Better Scientists Using Structured Inquiry Tasks =&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Ido Roll&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039; || Doug Bonn, James Day&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || Jan. 1, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || May. 31, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Site&#039;&#039;&#039; || UBC (not a LearnLab site)&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || Physics&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = ~200&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || ~1,000.&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || no data yet&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
Scientific inquiry tasks have the potential to help students acquire deep understanding of domain knowledge, as well as to improve their scientific reasoning skills. This project investigates scientific reasoning behaviors within one type of inquiry task: structured invention tasks.&lt;br /&gt;
The project uses qualitative and quantitative methods to answer three questions:&lt;br /&gt;
1. What scientific reasoning skills are being practiced during structured invention tasks?&lt;br /&gt;
2. How well do these skills transfer across topics and over time?&lt;br /&gt;
3. How can these skills be supported and improved?&lt;br /&gt;
&lt;br /&gt;
==Background &amp;amp; Significance==&lt;br /&gt;
&lt;br /&gt;
Scientific reasoning skills constitute an important class of SRL behaviors. While traditional inquiry tasks have the inherent benefit of letting students practice key self-regulatory skills, they have been shown to be an inefficient, and often unproductive, means of instruction. In the absence of adequate support, students often flounder and get lost within the infinite range of possibilities (Veermans, de Jong &amp;amp; van Joolingen, 2000). Consequently, students often fail to learn the target concepts, or at least do not learn them as efficiently as with direct instruction (Kirschner, Sweller &amp;amp; Clark, 2006).&lt;br /&gt;
&lt;br /&gt;
This project focuses on scientific reasoning behaviors during inquiry, and studies the relationships between scientific reasoning behavior and domain learning and motivation. I focus on the Invention as Preparation for Learning framework (IPL; Schwartz &amp;amp; Martin, 2004; Roll, Aleven &amp;amp; Koedinger, 2009). In IPL, students are asked to invent novel mathematical procedures prior to receiving direct instruction on the canonical procedures. IPL was shown to improve students’ domain knowledge and motivation (Kapur &amp;amp; Lee, 2009; Roll, Aleven &amp;amp; Koedinger, 2009; Schwartz &amp;amp; Martin, 2004). At the same time, students demonstrated poor metacognitive behavior and a lack of learning at the metacognitive level (Roll, 2009).&lt;br /&gt;
&lt;br /&gt;
The current project first seeks to identify the scientific reasoning skills that are practiced in IPL. The second stage assesses the transferability of these skills (across domain topics and over time). Finally, I will investigate the effect of supporting these skills on students&#039; domain and metacognitive learning.&lt;br /&gt;
&lt;br /&gt;
==Glossary==&lt;br /&gt;
&lt;br /&gt;
* [[invention task|Structured Invention Tasks]]&lt;br /&gt;
* [[metacognition|Self-Regulated Learning]]&lt;br /&gt;
* [[Scientific Reasoning]]&lt;br /&gt;
* [[Inquiry tasks]]&lt;br /&gt;
&lt;br /&gt;
==Research questions==&lt;br /&gt;
&lt;br /&gt;
The project has three steps, each of which focuses on a different research question:&lt;br /&gt;
&lt;br /&gt;
Step 1: What scientific reasoning skills are being used and practiced during structured invention tasks?&lt;br /&gt;
&lt;br /&gt;
Step 2: How well do these skills transfer across topics and over time (that is, is there learning of scientific reasoning skills from one invention task to the next)?&lt;br /&gt;
&lt;br /&gt;
Step 3: What support can best improve performance and learning of scientific reasoning skills?&lt;br /&gt;
&lt;br /&gt;
==Independent Variables==&lt;br /&gt;
&lt;br /&gt;
The first step of the project is an ethnography. Independent variables are the domain, topic, task, number of prior invention tasks, and experience of the group (that is, whether the group is new or has been working together for a long time).&lt;br /&gt;
&lt;br /&gt;
==Dependent Variables==&lt;br /&gt;
&lt;br /&gt;
The first step of this study includes analysis of transcripts of students working on structured invention tasks.&lt;br /&gt;
&lt;br /&gt;
==Hypothesis==&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
===Connections to Other Studies===&lt;br /&gt;
&lt;br /&gt;
===Annotated Bibliography===&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
&lt;br /&gt;
Kirschner, P. A., Sweller, J., &amp;amp; Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75-86.&lt;br /&gt;
	&lt;br /&gt;
Klahr, D., &amp;amp; Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12(1), 1-48.&lt;br /&gt;
	&lt;br /&gt;
Roll, I., Aleven, V., &amp;amp; Koedinger, K. R. (2009). Helping students know &#039;further&#039; - increasing the flexibility of students&#039; knowledge using symbolic invention tasks. In N. A. Taatgen &amp;amp; H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society (pp. 1169-1174). Austin, TX: Cognitive Science Society.&lt;br /&gt;
&lt;br /&gt;
Schwartz, D. L., &amp;amp; Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129-184.&lt;br /&gt;
&lt;br /&gt;
Veermans, K., de Jong, T., &amp;amp; van Joolingen, W. R. (2000). Promoting self-directed learning in simulation-based discovery learning environments through intelligent support. Interactive Learning Environments, 8(3), 229-255.&lt;br /&gt;
&lt;br /&gt;
===Future Plans===&lt;br /&gt;
&lt;br /&gt;
Spring 2010:&lt;br /&gt;
Conduct an ethnography in a first-year physics lab that uses invention tasks as a normal classroom practice.&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10238</id>
		<title>Roll - Inquiry</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10238"/>
		<updated>2009-12-04T22:58:49Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Dependent Variables */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Helping Students Become Better Scientists Using Structured Inquiry Tasks =&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Ido Roll&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039; || Doug Bonn, James Day&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || Jan. 1, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || May 31, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Site&#039;&#039;&#039; || UBC (not a LearnLab site)&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || Physics&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = ~200&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || ~1,000&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || no data yet&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
Scientific inquiry tasks have the potential to help students acquire a deep understanding of domain knowledge, as well as improve their scientific reasoning skills. This project investigates scientific reasoning behaviors within one type of inquiry task: structured invention tasks.&lt;br /&gt;
The project uses qualitative and quantitative methods to answer three questions:&lt;br /&gt;
1. What scientific reasoning skills are being practiced during structured invention tasks?&lt;br /&gt;
2. How well do these skills transfer across topics and over time?&lt;br /&gt;
3. How can these skills be supported and improved?&lt;br /&gt;
&lt;br /&gt;
==Background &amp;amp; Significance==&lt;br /&gt;
&lt;br /&gt;
Scientific reasoning skills constitute an important class of SRL behaviors. While traditional inquiry tasks have the inherent benefit of letting students practice key self-regulatory skills, they have been shown to be an inefficient, and often unproductive, means of instruction. In the absence of adequate support, students often flounder, lost within the infinite range of possibilities (Veermans, de Jong &amp;amp; van Joolingen, 2000). Consequently, students often fail to learn the target concepts, or at least do not learn them as efficiently as with direct instruction (Kirschner, Sweller &amp;amp; Clark, 2006).&lt;br /&gt;
&lt;br /&gt;
This project focuses on scientific reasoning behaviors during inquiry, and studies the relationships between scientific reasoning behavior and domain learning and motivation. I focus on the Invention as Preparation for Learning framework (IPL; Schwartz &amp;amp; Martin, 2004; Roll, Aleven &amp;amp; Koedinger, 2009). In IPL, students are asked to invent novel mathematical procedures prior to receiving direct instruction on the canonical procedures. IPL has been shown to improve students’ domain knowledge and motivation (Kapur &amp;amp; Lee, 2009; Roll, Aleven &amp;amp; Koedinger, 2009; Schwartz &amp;amp; Martin, 2004). At the same time, students demonstrated poor metacognitive behavior and a lack of learning at the metacognitive level (Roll, 2009).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The current project first seeks to identify the scientific reasoning skills that are practiced in IPL. The second stage of the project assesses the transferability of these skills (across domain topics and over time). Finally, I will investigate the effect of supporting these skills on students&#039; domain and metacognitive learning.&lt;br /&gt;
&lt;br /&gt;
==Glossary==&lt;br /&gt;
&lt;br /&gt;
* [[invention task|Structured Invention Tasks]]&lt;br /&gt;
* [[metacognition|Self-Regulated Learning]]&lt;br /&gt;
* [[Scientific Reasoning]]&lt;br /&gt;
* [[Inquiry tasks]]&lt;br /&gt;
&lt;br /&gt;
==Research questions==&lt;br /&gt;
&lt;br /&gt;
The project has three steps, each of which focuses on a different research question:&lt;br /&gt;
&lt;br /&gt;
Step 1: What scientific reasoning skills are being used and practiced during structured invention tasks?&lt;br /&gt;
&lt;br /&gt;
Step 2: How well do these skills transfer across topics and over time (that is, is there learning of scientific reasoning skills from one invention task to the next)?&lt;br /&gt;
&lt;br /&gt;
Step 3: What support can best improve performance and learning of scientific reasoning skills?&lt;br /&gt;
&lt;br /&gt;
==Independent Variables==&lt;br /&gt;
&lt;br /&gt;
==Dependent Variables==&lt;br /&gt;
&lt;br /&gt;
The first step of this study includes analysis of transcripts of students working on structured invention tasks.&lt;br /&gt;
&lt;br /&gt;
==Hypothesis==&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
===Connections to Other Studies===&lt;br /&gt;
&lt;br /&gt;
===Annotated Bibliography===&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
&lt;br /&gt;
Kirschner, P. A., Sweller, J., &amp;amp; Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75-86.&lt;br /&gt;
	&lt;br /&gt;
Klahr, D., &amp;amp; Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12(1), 1-48.&lt;br /&gt;
	&lt;br /&gt;
Roll, I., Aleven, V., &amp;amp; Koedinger, K. R. (2009). Helping students know &#039;further&#039; - increasing the flexibility of students&#039; knowledge using symbolic invention tasks. In N. A. Taatgen &amp;amp; H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society (pp. 1169-1174). Austin, TX: Cognitive Science Society.&lt;br /&gt;
&lt;br /&gt;
Schwartz, D. L., &amp;amp; Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129-184.&lt;br /&gt;
&lt;br /&gt;
Veermans, K., de Jong, T., &amp;amp; van Joolingen, W. R. (2000). Promoting self-directed learning in simulation-based discovery learning environments through intelligent support. Interactive Learning Environments, 8(3), 229-255.&lt;br /&gt;
&lt;br /&gt;
===Future Plans===&lt;br /&gt;
&lt;br /&gt;
Spring 2010:&lt;br /&gt;
Conduct an ethnography in a first-year physics lab that uses invention tasks as a normal classroom practice.&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10237</id>
		<title>Roll - Inquiry</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10237"/>
		<updated>2009-12-04T22:57:35Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Research questions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Helping Students Become Better Scientists Using Structured Inquiry Tasks =&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Ido Roll&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039; || Doug Bonn, James Day&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || Jan. 1, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || May 31, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Site&#039;&#039;&#039; || UBC (not a LearnLab site)&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || Physics&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = ~200&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || ~1,000&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || no data yet&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
Scientific inquiry tasks have the potential to help students acquire a deep understanding of domain knowledge, as well as improve their scientific reasoning skills. This project investigates scientific reasoning behaviors within one type of inquiry task: structured invention tasks.&lt;br /&gt;
The project uses qualitative and quantitative methods to answer three questions:&lt;br /&gt;
1. What scientific reasoning skills are being practiced during structured invention tasks?&lt;br /&gt;
2. How well do these skills transfer across topics and over time?&lt;br /&gt;
3. How can these skills be supported and improved?&lt;br /&gt;
&lt;br /&gt;
==Background &amp;amp; Significance==&lt;br /&gt;
&lt;br /&gt;
Scientific reasoning skills constitute an important class of SRL behaviors. While traditional inquiry tasks have the inherent benefit of letting students practice key self-regulatory skills, they have been shown to be an inefficient, and often unproductive, means of instruction. In the absence of adequate support, students often flounder, lost within the infinite range of possibilities (Veermans, de Jong &amp;amp; van Joolingen, 2000). Consequently, students often fail to learn the target concepts, or at least do not learn them as efficiently as with direct instruction (Kirschner, Sweller &amp;amp; Clark, 2006).&lt;br /&gt;
&lt;br /&gt;
This project focuses on scientific reasoning behaviors during inquiry, and studies the relationships between scientific reasoning behavior and domain learning and motivation. I focus on the Invention as Preparation for Learning framework (IPL; Schwartz &amp;amp; Martin, 2004; Roll, Aleven &amp;amp; Koedinger, 2009). In IPL, students are asked to invent novel mathematical procedures prior to receiving direct instruction on the canonical procedures. IPL has been shown to improve students’ domain knowledge and motivation (Kapur &amp;amp; Lee, 2009; Roll, Aleven &amp;amp; Koedinger, 2009; Schwartz &amp;amp; Martin, 2004). At the same time, students demonstrated poor metacognitive behavior and a lack of learning at the metacognitive level (Roll, 2009).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The current project first seeks to identify the scientific reasoning skills that are practiced in IPL. The second stage of the project assesses the transferability of these skills (across domain topics and over time). Finally, I will investigate the effect of supporting these skills on students&#039; domain and metacognitive learning.&lt;br /&gt;
&lt;br /&gt;
==Glossary==&lt;br /&gt;
&lt;br /&gt;
* [[invention task|Structured Invention Tasks]]&lt;br /&gt;
* [[metacognition|Self-Regulated Learning]]&lt;br /&gt;
* [[Scientific Reasoning]]&lt;br /&gt;
* [[Inquiry tasks]]&lt;br /&gt;
&lt;br /&gt;
==Research questions==&lt;br /&gt;
&lt;br /&gt;
The project has three steps, each of which focuses on a different research question:&lt;br /&gt;
&lt;br /&gt;
Step 1: What scientific reasoning skills are being used and practiced during structured invention tasks?&lt;br /&gt;
&lt;br /&gt;
Step 2: How well do these skills transfer across topics and over time (that is, is there learning of scientific reasoning skills from one invention task to the next)?&lt;br /&gt;
&lt;br /&gt;
Step 3: What support can best improve performance and learning of scientific reasoning skills?&lt;br /&gt;
&lt;br /&gt;
==Independent Variables==&lt;br /&gt;
&lt;br /&gt;
==Dependent Variables==&lt;br /&gt;
&lt;br /&gt;
==Hypothesis==&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
===Connections to Other Studies===&lt;br /&gt;
&lt;br /&gt;
===Annotated Bibliography===&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
&lt;br /&gt;
Kirschner, P. A., Sweller, J., &amp;amp; Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75-86.&lt;br /&gt;
	&lt;br /&gt;
Klahr, D., &amp;amp; Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12(1), 1-48.&lt;br /&gt;
	&lt;br /&gt;
Roll, I., Aleven, V., &amp;amp; Koedinger, K. R. (2009). Helping students know &#039;further&#039; - increasing the flexibility of students&#039; knowledge using symbolic invention tasks. In N. A. Taatgen &amp;amp; H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society (pp. 1169-1174). Austin, TX: Cognitive Science Society.&lt;br /&gt;
&lt;br /&gt;
Schwartz, D. L., &amp;amp; Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129-184.&lt;br /&gt;
&lt;br /&gt;
Veermans, K., de Jong, T., &amp;amp; van Joolingen, W. R. (2000). Promoting self-directed learning in simulation-based discovery learning environments through intelligent support. Interactive Learning Environments, 8(3), 229-255.&lt;br /&gt;
&lt;br /&gt;
===Future Plans===&lt;br /&gt;
&lt;br /&gt;
Spring 2010:&lt;br /&gt;
Conduct an ethnography in a first-year physics lab that uses invention tasks as a normal classroom practice.&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10236</id>
		<title>Roll - Inquiry</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10236"/>
		<updated>2009-12-04T22:52:09Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Glossary */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Helping Students Become Better Scientists Using Structured Inquiry Tasks =&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Ido Roll&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039; || Doug Bonn, James Day&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || Jan. 1, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || May 31, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Site&#039;&#039;&#039; || UBC (not a LearnLab site)&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || Physics&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = ~200&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || ~1,000&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || no data yet&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
Scientific inquiry tasks have the potential to help students acquire a deep understanding of domain knowledge, as well as improve their scientific reasoning skills. This project investigates scientific reasoning behaviors within one type of inquiry task: structured invention tasks.&lt;br /&gt;
The project uses qualitative and quantitative methods to answer three questions:&lt;br /&gt;
1. What scientific reasoning skills are being practiced during structured invention tasks?&lt;br /&gt;
2. How well do these skills transfer across topics and over time?&lt;br /&gt;
3. How can these skills be supported and improved?&lt;br /&gt;
&lt;br /&gt;
==Background &amp;amp; Significance==&lt;br /&gt;
&lt;br /&gt;
Scientific reasoning skills constitute an important class of SRL behaviors. While traditional inquiry tasks have the inherent benefit of letting students practice key self-regulatory skills, they have been shown to be an inefficient, and often unproductive, means of instruction. In the absence of adequate support, students often flounder, lost within the infinite range of possibilities (Veermans, de Jong &amp;amp; van Joolingen, 2000). Consequently, students often fail to learn the target concepts, or at least do not learn them as efficiently as with direct instruction (Kirschner, Sweller &amp;amp; Clark, 2006).&lt;br /&gt;
&lt;br /&gt;
This project focuses on scientific reasoning behaviors during inquiry, and studies the relationships between scientific reasoning behavior and domain learning and motivation. I focus on the Invention as Preparation for Learning framework (IPL; Schwartz &amp;amp; Martin, 2004; Roll, Aleven &amp;amp; Koedinger, 2009). In IPL, students are asked to invent novel mathematical procedures prior to receiving direct instruction on the canonical procedures. IPL has been shown to improve students’ domain knowledge and motivation (Kapur &amp;amp; Lee, 2009; Roll, Aleven &amp;amp; Koedinger, 2009; Schwartz &amp;amp; Martin, 2004). At the same time, students demonstrated poor metacognitive behavior and a lack of learning at the metacognitive level (Roll, 2009).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The current project first seeks to identify the scientific reasoning skills that are practiced in IPL. The second stage of the project assesses the transferability of these skills (across domain topics and over time). Finally, I will investigate the effect of supporting these skills on students&#039; domain and metacognitive learning.&lt;br /&gt;
&lt;br /&gt;
==Glossary==&lt;br /&gt;
&lt;br /&gt;
* [[invention task|Structured Invention Tasks]]&lt;br /&gt;
* [[metacognition|Self-Regulated Learning]]&lt;br /&gt;
* [[Scientific Reasoning]]&lt;br /&gt;
* [[Inquiry tasks]]&lt;br /&gt;
&lt;br /&gt;
==Research questions==&lt;br /&gt;
&lt;br /&gt;
The project has three steps, each of which focuses on a different research question:&lt;br /&gt;
&lt;br /&gt;
Step 1: What SRL skills are being used and practiced during structured invention tasks?&lt;br /&gt;
&lt;br /&gt;
Step 2: &lt;br /&gt;
&lt;br /&gt;
==Independent Variables==&lt;br /&gt;
&lt;br /&gt;
==Dependent Variables==&lt;br /&gt;
&lt;br /&gt;
==Hypothesis==&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
===Connections to Other Studies===&lt;br /&gt;
&lt;br /&gt;
===Annotated Bibliography===&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
&lt;br /&gt;
Kirschner, P. A., Sweller, J., &amp;amp; Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75-86.&lt;br /&gt;
	&lt;br /&gt;
Klahr, D., &amp;amp; Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12(1), 1-48.&lt;br /&gt;
	&lt;br /&gt;
Roll, I., Aleven, V., &amp;amp; Koedinger, K. R. (2009). Helping students know &#039;further&#039; - increasing the flexibility of students&#039; knowledge using symbolic invention tasks. In N. A. Taatgen &amp;amp; H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society (pp. 1169-1174). Austin, TX: Cognitive Science Society.&lt;br /&gt;
&lt;br /&gt;
Schwartz, D. L., &amp;amp; Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129-184.&lt;br /&gt;
&lt;br /&gt;
Veermans, K., de Jong, T., &amp;amp; van Joolingen, W. R. (2000). Promoting self-directed learning in simulation-based discovery learning environments through intelligent support. Interactive Learning Environments, 8(3), 229-255.&lt;br /&gt;
&lt;br /&gt;
===Future Plans===&lt;br /&gt;
&lt;br /&gt;
Spring 2010:&lt;br /&gt;
Conduct an ethnography in a first-year physics lab that uses invention tasks as a normal classroom practice.&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10235</id>
		<title>Roll - Inquiry</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10235"/>
		<updated>2009-12-04T22:45:46Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Background &amp;amp; Significance */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Helping Students Become Better Scientists Using Structured Inquiry Tasks =&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Ido Roll&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039; || Doug Bonn, James Day&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || Jan. 1, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || May 31, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Site&#039;&#039;&#039; || UBC (not a LearnLab site)&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || Physics&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = ~200&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || ~1,000&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || no data yet&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
Scientific inquiry tasks have the potential to help students acquire a deep understanding of domain knowledge, as well as improve their scientific reasoning skills. This project investigates scientific reasoning behaviors within one type of inquiry task: structured invention tasks.&lt;br /&gt;
The project uses qualitative and quantitative methods to answer three questions:&lt;br /&gt;
1. What scientific reasoning skills are being practiced during structured invention tasks?&lt;br /&gt;
2. How well do these skills transfer across topics and over time?&lt;br /&gt;
3. How can these skills be supported and improved?&lt;br /&gt;
&lt;br /&gt;
==Background &amp;amp; Significance==&lt;br /&gt;
&lt;br /&gt;
Scientific reasoning skills constitute an important class of SRL behaviors. While traditional inquiry tasks have the inherent benefit of letting students practice key self-regulatory skills, they have been shown to be an inefficient, and often unproductive, means of instruction. In the absence of adequate support, students often flounder, lost within the infinite range of possibilities (Veermans, de Jong &amp;amp; van Joolingen, 2000). Consequently, students often fail to learn the target concepts, or at least do not learn them as efficiently as with direct instruction (Kirschner, Sweller &amp;amp; Clark, 2006).&lt;br /&gt;
&lt;br /&gt;
This project focuses on scientific reasoning behaviors during inquiry, and studies the relationships between scientific reasoning behavior and domain learning and motivation. I focus on the Invention as Preparation for Learning framework (IPL; Schwartz &amp;amp; Martin, 2004; Roll, Aleven &amp;amp; Koedinger, 2009). In IPL, students are asked to invent novel mathematical procedures prior to receiving direct instruction on the canonical procedures. IPL has been shown to improve students’ domain knowledge and motivation (Kapur &amp;amp; Lee, 2009; Roll, Aleven &amp;amp; Koedinger, 2009; Schwartz &amp;amp; Martin, 2004). At the same time, students demonstrated poor metacognitive behavior and a lack of learning at the metacognitive level (Roll, 2009).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The current project first seeks to identify the scientific reasoning skills that are practiced in IPL. The second stage of the project assesses the transferability of these skills (across domain topics and over time). Finally, I will investigate the effect of supporting these skills on students&#039; domain and metacognitive learning.&lt;br /&gt;
&lt;br /&gt;
==Glossary==&lt;br /&gt;
&lt;br /&gt;
* Structured Invention Tasks &lt;br /&gt;
* Self-Regulated Learning&lt;br /&gt;
* Scientific Reasoning&lt;br /&gt;
* Inquiry tasks&lt;br /&gt;
&lt;br /&gt;
==Research questions==&lt;br /&gt;
&lt;br /&gt;
The project has three steps, each of which focuses on a different research question:&lt;br /&gt;
&lt;br /&gt;
Step 1: What SRL skills are being used and practiced during structured invention tasks?&lt;br /&gt;
&lt;br /&gt;
Step 2: &lt;br /&gt;
&lt;br /&gt;
==Independent Variables==&lt;br /&gt;
&lt;br /&gt;
==Dependent Variables==&lt;br /&gt;
&lt;br /&gt;
==Hypothesis==&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
===Connections to Other Studies===&lt;br /&gt;
&lt;br /&gt;
===Annotated Bibliography===&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
&lt;br /&gt;
Kirschner, P. A., Sweller, J., &amp;amp; Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75-86.&lt;br /&gt;
	&lt;br /&gt;
Klahr, D., &amp;amp; Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12(1), 1-48.&lt;br /&gt;
	&lt;br /&gt;
Roll, I., Aleven, V., &amp;amp; Koedinger, K. R. (2009). Helping students know &#039;further&#039; - increasing the flexibility of students&#039; knowledge using symbolic invention tasks. In N. A. Taatgen &amp;amp; H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society (pp. 1169-1174). Austin, TX: Cognitive Science Society.&lt;br /&gt;
&lt;br /&gt;
Schwartz, D. L., &amp;amp; Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129-184.&lt;br /&gt;
&lt;br /&gt;
Veermans, K., de Jong, T., &amp;amp; van Joolingen, W. R. (2000). Promoting self-directed learning in simulation-based discovery learning environments through intelligent support. Interactive Learning Environments, 8(3), 229-255.&lt;br /&gt;
&lt;br /&gt;
===Future Plans===&lt;br /&gt;
&lt;br /&gt;
Spring 2010:&lt;br /&gt;
Conduct an ethnography in a first-year physics lab that uses invention tasks as a normal classroom practice.&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10234</id>
		<title>Roll - Inquiry</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10234"/>
		<updated>2009-12-04T22:42:09Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Abstract */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Helping Students Become Better Scientists Using Structured Inquiry Tasks =&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Ido Roll&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039; || Doug Bonn, James Day&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || Jan. 1, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || May 31, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Site&#039;&#039;&#039; || UBC (not a LearnLab site)&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || Physics&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = ~200&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || ~1,000&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || no data yet&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
Scientific inquiry tasks have the potential to help students acquire a deep understanding of domain knowledge, as well as improve their scientific reasoning skills. This project investigates scientific reasoning behaviors within one type of inquiry task: structured invention tasks.&lt;br /&gt;
The project uses qualitative and quantitative methods to answer three questions:&lt;br /&gt;
1. What scientific reasoning skills are being practiced during structured invention tasks?&lt;br /&gt;
2. How well do these skills transfer across topics and over time?&lt;br /&gt;
3. How can these skills be supported and improved?&lt;br /&gt;
&lt;br /&gt;
==Background &amp;amp; Significance==&lt;br /&gt;
&lt;br /&gt;
This project focuses on SRL behavior during scientific inquiry and on the relationships between SRL behavior, domain learning, and motivation. While traditional inquiry tasks offer the inherent benefit of letting students practice key self-regulatory skills, they have been shown to be an inefficient, and often unproductive, means of instruction. In the absence of adequate support, students often flounder and become lost within the infinite range of possibilities (Veermans, de Jong &amp;amp; van Joolingen, 2000). Consequently, students often fail to learn the target concepts, or at least do not learn them as efficiently as with direct instruction (Kirschner, Sweller &amp;amp; Clark, 2006).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this project I evaluate whether supporting students’ metacognitive behavior in inquiry tasks helps them acquire better domain and scientific reasoning skills without reducing the motivational benefits and high agency that inquiry tasks afford. I focus on the Invention as Preparation for Learning framework (IPL; Schwartz &amp;amp; Martin, 2004; Roll, Aleven &amp;amp; Koedinger, 2009). In IPL, students are asked to invent novel mathematical procedures prior to receiving direct instruction on the canonical procedures. IPL has been shown to improve students’ domain knowledge and motivation (Kapur &amp;amp; Lee, 2009; Roll, Aleven &amp;amp; Koedinger, 2009; Schwartz &amp;amp; Martin, 2004). At the same time, students demonstrated poor metacognitive behavior and a lack of learning at the metacognitive level (Roll, 2009).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The current project first seeks to identify the SRL skills that are practiced in IPL. The second stage of the project assesses the transferability of these skills across domain topics and over time. Last, I will investigate the effect of supporting these skills on students&#039; domain and metacognitive learning.&lt;br /&gt;
&lt;br /&gt;
==Glossary==&lt;br /&gt;
&lt;br /&gt;
* Structured Invention Tasks &lt;br /&gt;
* Self-Regulated Learning&lt;br /&gt;
* Scientific Reasoning&lt;br /&gt;
* Inquiry tasks&lt;br /&gt;
&lt;br /&gt;
==Research questions==&lt;br /&gt;
&lt;br /&gt;
The project has 3 steps, each of which focuses on a different research question:&lt;br /&gt;
&lt;br /&gt;
Step 1: What SRL skills are being used and practiced during structured invention tasks?&lt;br /&gt;
&lt;br /&gt;
Step 2: &lt;br /&gt;
&lt;br /&gt;
==Independent Variables==&lt;br /&gt;
&lt;br /&gt;
==Dependent Variables==&lt;br /&gt;
&lt;br /&gt;
==Hypothesis==&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
===Connections to Other Studies===&lt;br /&gt;
&lt;br /&gt;
===Annotated Bibliography===&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
&lt;br /&gt;
Kirschner, P. A., Sweller, J., &amp;amp; Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75-86.&lt;br /&gt;
	&lt;br /&gt;
Klahr, D., &amp;amp; Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12(1), 1-48.&lt;br /&gt;
	&lt;br /&gt;
Roll, I., Aleven, V., &amp;amp; Koedinger, K. R. (2009). Helping students know &#039;further&#039; - increasing the flexibility of students&#039; knowledge using symbolic invention tasks. In N. A. Taatgen, &amp;amp; H. van Rijn (Eds.), Proceedings of the 31st annual conference of the cognitive science society. (pp. 1169-74). Austin, TX: Cognitive Science Society.&lt;br /&gt;
&lt;br /&gt;
Schwartz, D. L., &amp;amp; Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129-184.&lt;br /&gt;
&lt;br /&gt;
Veermans, K., de Jong, T., &amp;amp; van Joolingen, W. R. (2000). Promoting self-directed learning in simulation-based discovery learning environments through intelligent support. Interactive Learning Environments, 8(3), 229-255.&lt;br /&gt;
&lt;br /&gt;
===Future Plans===&lt;br /&gt;
&lt;br /&gt;
Spring 2010:&lt;br /&gt;
Conduct an ethnography in a first-year physics lab that uses invention tasks as a normal classroom practice.&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10233</id>
		<title>Roll - Inquiry</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10233"/>
		<updated>2009-12-04T22:41:33Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Abstract */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Helping Students Become Better Scientists Using Structured Inquiry Tasks =&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Ido Roll&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039; || Doug Bonn, James Day&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || Jan. 1, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || May 31, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Site&#039;&#039;&#039; || UBC (not a LearnLab site)&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || Physics&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = ~200&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || ~1,000.&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || no data yet&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
Scientific inquiry tasks have the potential to help students acquire a deep understanding of domain knowledge, as well as improve their scientific reasoning skills. This project investigates scientific reasoning behaviors within one type of inquiry task: structured invention tasks.&lt;br /&gt;
The project uses qualitative and quantitative methods to answer three questions:&lt;br /&gt;
1. What SRL skills are being practiced during structured invention tasks?&lt;br /&gt;
2. How well do these skills transfer across topics and over time?&lt;br /&gt;
3. How can these skills be supported and improved?&lt;br /&gt;
&lt;br /&gt;
==Background &amp;amp; Significance==&lt;br /&gt;
&lt;br /&gt;
This project focuses on SRL behavior during scientific inquiry and on the relationships between SRL behavior, domain learning, and motivation. While traditional inquiry tasks offer the inherent benefit of letting students practice key self-regulatory skills, they have been shown to be an inefficient, and often unproductive, means of instruction. In the absence of adequate support, students often flounder and become lost within the infinite range of possibilities (Veermans, de Jong &amp;amp; van Joolingen, 2000). Consequently, students often fail to learn the target concepts, or at least do not learn them as efficiently as with direct instruction (Kirschner, Sweller &amp;amp; Clark, 2006).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this project I evaluate whether supporting students’ metacognitive behavior in inquiry tasks helps them acquire better domain and scientific reasoning skills without reducing the motivational benefits and high agency that inquiry tasks afford. I focus on the Invention as Preparation for Learning framework (IPL; Schwartz &amp;amp; Martin, 2004; Roll, Aleven &amp;amp; Koedinger, 2009). In IPL, students are asked to invent novel mathematical procedures prior to receiving direct instruction on the canonical procedures. IPL has been shown to improve students’ domain knowledge and motivation (Kapur &amp;amp; Lee, 2009; Roll, Aleven &amp;amp; Koedinger, 2009; Schwartz &amp;amp; Martin, 2004). At the same time, students demonstrated poor metacognitive behavior and a lack of learning at the metacognitive level (Roll, 2009).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The current project first seeks to identify the SRL skills that are practiced in IPL. The second stage of the project assesses the transferability of these skills across domain topics and over time. Last, I will investigate the effect of supporting these skills on students&#039; domain and metacognitive learning.&lt;br /&gt;
&lt;br /&gt;
==Glossary==&lt;br /&gt;
&lt;br /&gt;
* Structured Invention Tasks &lt;br /&gt;
* Self-Regulated Learning&lt;br /&gt;
* Scientific Reasoning&lt;br /&gt;
* Inquiry tasks&lt;br /&gt;
&lt;br /&gt;
==Research questions==&lt;br /&gt;
&lt;br /&gt;
The project has 3 steps, each of which focuses on a different research question:&lt;br /&gt;
&lt;br /&gt;
Step 1: What SRL skills are being used and practiced during structured invention tasks?&lt;br /&gt;
&lt;br /&gt;
Step 2: &lt;br /&gt;
&lt;br /&gt;
==Independent Variables==&lt;br /&gt;
&lt;br /&gt;
==Dependent Variables==&lt;br /&gt;
&lt;br /&gt;
==Hypothesis==&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
===Connections to Other Studies===&lt;br /&gt;
&lt;br /&gt;
===Annotated Bibliography===&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
&lt;br /&gt;
Kirschner, P. A., Sweller, J., &amp;amp; Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75-86.&lt;br /&gt;
	&lt;br /&gt;
Klahr, D., &amp;amp; Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12(1), 1-48.&lt;br /&gt;
	&lt;br /&gt;
Roll, I., Aleven, V., &amp;amp; Koedinger, K. R. (2009). Helping students know &#039;further&#039; - increasing the flexibility of students&#039; knowledge using symbolic invention tasks. In N. A. Taatgen, &amp;amp; H. van Rijn (Eds.), Proceedings of the 31st annual conference of the cognitive science society. (pp. 1169-74). Austin, TX: Cognitive Science Society.&lt;br /&gt;
&lt;br /&gt;
Schwartz, D. L., &amp;amp; Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129-184.&lt;br /&gt;
&lt;br /&gt;
Veermans, K., de Jong, T., &amp;amp; van Joolingen, W. R. (2000). Promoting self-directed learning in simulation-based discovery learning environments through intelligent support. Interactive Learning Environments, 8(3), 229-255.&lt;br /&gt;
&lt;br /&gt;
===Future Plans===&lt;br /&gt;
&lt;br /&gt;
Spring 2010:&lt;br /&gt;
Conduct an ethnography in a first-year physics lab that uses invention tasks as a normal classroom practice.&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10232</id>
		<title>Roll - Inquiry</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10232"/>
		<updated>2009-12-04T22:35:37Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Abstract */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Helping Students Become Better Scientists Using Structured Inquiry Tasks =&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Ido Roll&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039; || Doug Bonn, James Day&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || Jan. 1, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || May 31, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Site&#039;&#039;&#039; || UBC (not a LearnLab site)&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || Physics&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = ~200&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || ~1,000.&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || no data yet&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
Scientific inquiry tasks have the potential to help students acquire a deep understanding of domain knowledge, as well as improve their scientific reasoning skills. This project investigates SRL behavior within one type of inquiry task: structured invention tasks.&lt;br /&gt;
The project uses qualitative and quantitative methods to answer three questions:&lt;br /&gt;
1. What SRL skills are being practiced during structured invention tasks?&lt;br /&gt;
2. How well do these skills transfer across topics and over time?&lt;br /&gt;
3. How can these skills be supported and improved?&lt;br /&gt;
&lt;br /&gt;
==Background &amp;amp; Significance==&lt;br /&gt;
&lt;br /&gt;
This project focuses on SRL behavior during scientific inquiry and on the relationships between SRL behavior, domain learning, and motivation. While traditional inquiry tasks offer the inherent benefit of letting students practice key self-regulatory skills, they have been shown to be an inefficient, and often unproductive, means of instruction. In the absence of adequate support, students often flounder and become lost within the infinite range of possibilities (Veermans, de Jong &amp;amp; van Joolingen, 2000). Consequently, students often fail to learn the target concepts, or at least do not learn them as efficiently as with direct instruction (Kirschner, Sweller &amp;amp; Clark, 2006).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this project I evaluate whether supporting students’ metacognitive behavior in inquiry tasks helps them acquire better domain and scientific reasoning skills without reducing the motivational benefits and high agency that inquiry tasks afford. I focus on the Invention as Preparation for Learning framework (IPL; Schwartz &amp;amp; Martin, 2004; Roll, Aleven &amp;amp; Koedinger, 2009). In IPL, students are asked to invent novel mathematical procedures prior to receiving direct instruction on the canonical procedures. IPL has been shown to improve students’ domain knowledge and motivation (Kapur &amp;amp; Lee, 2009; Roll, Aleven &amp;amp; Koedinger, 2009; Schwartz &amp;amp; Martin, 2004). At the same time, students demonstrated poor metacognitive behavior and a lack of learning at the metacognitive level (Roll, 2009).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The current project first seeks to identify the SRL skills that are practiced in IPL. The second stage of the project assesses the transferability of these skills across domain topics and over time. Last, I will investigate the effect of supporting these skills on students&#039; domain and metacognitive learning.&lt;br /&gt;
&lt;br /&gt;
==Glossary==&lt;br /&gt;
&lt;br /&gt;
* Structured Invention Tasks &lt;br /&gt;
* Self-Regulated Learning&lt;br /&gt;
* Scientific Reasoning&lt;br /&gt;
* Inquiry tasks&lt;br /&gt;
&lt;br /&gt;
==Research questions==&lt;br /&gt;
&lt;br /&gt;
The project has 3 steps, each of which focuses on a different research question:&lt;br /&gt;
&lt;br /&gt;
Step 1: What SRL skills are being used and practiced during structured invention tasks?&lt;br /&gt;
&lt;br /&gt;
Step 2: &lt;br /&gt;
&lt;br /&gt;
==Independent Variables==&lt;br /&gt;
&lt;br /&gt;
==Dependent Variables==&lt;br /&gt;
&lt;br /&gt;
==Hypothesis==&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
===Connections to Other Studies===&lt;br /&gt;
&lt;br /&gt;
===Annotated Bibliography===&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
&lt;br /&gt;
Kirschner, P. A., Sweller, J., &amp;amp; Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75-86.&lt;br /&gt;
	&lt;br /&gt;
Klahr, D., &amp;amp; Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12(1), 1-48.&lt;br /&gt;
	&lt;br /&gt;
Roll, I., Aleven, V., &amp;amp; Koedinger, K. R. (2009). Helping students know &#039;further&#039; - increasing the flexibility of students&#039; knowledge using symbolic invention tasks. In N. A. Taatgen, &amp;amp; H. van Rijn (Eds.), Proceedings of the 31st annual conference of the cognitive science society. (pp. 1169-74). Austin, TX: Cognitive Science Society.&lt;br /&gt;
&lt;br /&gt;
Schwartz, D. L., &amp;amp; Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129-184.&lt;br /&gt;
&lt;br /&gt;
Veermans, K., de Jong, T., &amp;amp; van Joolingen, W. R. (2000). Promoting self-directed learning in simulation-based discovery learning environments through intelligent support. Interactive Learning Environments, 8(3), 229-255.&lt;br /&gt;
&lt;br /&gt;
===Future Plans===&lt;br /&gt;
&lt;br /&gt;
Spring 2010:&lt;br /&gt;
Conduct an ethnography in a first-year physics lab that uses invention tasks as a normal classroom practice.&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10222</id>
		<title>Roll - Inquiry</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10222"/>
		<updated>2009-12-04T22:01:17Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Background &amp;amp; Significance */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Helping Students Become Better Scientists Using Structured Inquiry Tasks =&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Ido Roll&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039; || Doug Bonn, James Day&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || Jan. 1, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || May 31, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Site&#039;&#039;&#039; || UBC (not a LearnLab site)&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || Physics&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = ~200&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || ~1,000.&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || no data yet&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
==Background &amp;amp; Significance==&lt;br /&gt;
&lt;br /&gt;
This project focuses on SRL behavior during scientific inquiry and on the relationships between SRL behavior, domain learning, and motivation. While traditional inquiry tasks offer the inherent benefit of letting students practice key self-regulatory skills, they have been shown to be an inefficient, and often unproductive, means of instruction. In the absence of adequate support, students often flounder and become lost within the infinite range of possibilities (Veermans, de Jong &amp;amp; van Joolingen, 2000). Consequently, students often fail to learn the target concepts, or at least do not learn them as efficiently as with direct instruction (Kirschner, Sweller &amp;amp; Clark, 2006).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this project I evaluate whether supporting students’ metacognitive behavior in inquiry tasks helps them acquire better domain and scientific reasoning skills without reducing the motivational benefits and high agency that inquiry tasks afford. I focus on the Invention as Preparation for Learning framework (IPL; Schwartz &amp;amp; Martin, 2004; Roll, Aleven &amp;amp; Koedinger, 2009). In IPL, students are asked to invent novel mathematical procedures prior to receiving direct instruction on the canonical procedures. IPL has been shown to improve students’ domain knowledge and motivation (Kapur &amp;amp; Lee, 2009; Roll, Aleven &amp;amp; Koedinger, 2009; Schwartz &amp;amp; Martin, 2004). At the same time, students demonstrated poor metacognitive behavior and a lack of learning at the metacognitive level (Roll, 2009).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The current project first seeks to identify the SRL skills that are practiced in IPL. The second stage of the project assesses the transferability of these skills across domain topics and over time. Last, I will investigate the effect of supporting these skills on students&#039; domain and metacognitive learning.&lt;br /&gt;
&lt;br /&gt;
==Glossary==&lt;br /&gt;
&lt;br /&gt;
* Structured Invention Tasks &lt;br /&gt;
* Self-Regulated Learning&lt;br /&gt;
* Scientific Reasoning&lt;br /&gt;
* Inquiry tasks&lt;br /&gt;
&lt;br /&gt;
==Research questions==&lt;br /&gt;
&lt;br /&gt;
The project has 3 steps, each of which focuses on a different research question:&lt;br /&gt;
&lt;br /&gt;
Step 1: What SRL skills are being used and practiced during structured invention tasks?&lt;br /&gt;
&lt;br /&gt;
Step 2: &lt;br /&gt;
&lt;br /&gt;
==Independent Variables==&lt;br /&gt;
&lt;br /&gt;
==Dependent Variables==&lt;br /&gt;
&lt;br /&gt;
==Hypothesis==&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
===Connections to Other Studies===&lt;br /&gt;
&lt;br /&gt;
===Annotated Bibliography===&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
&lt;br /&gt;
Kirschner, P. A., Sweller, J., &amp;amp; Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75-86.&lt;br /&gt;
	&lt;br /&gt;
Klahr, D., &amp;amp; Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12(1), 1-48.&lt;br /&gt;
	&lt;br /&gt;
Roll, I., Aleven, V., &amp;amp; Koedinger, K. R. (2009). Helping students know &#039;further&#039; - increasing the flexibility of students&#039; knowledge using symbolic invention tasks. In N. A. Taatgen, &amp;amp; H. van Rijn (Eds.), Proceedings of the 31st annual conference of the cognitive science society. (pp. 1169-74). Austin, TX: Cognitive Science Society.&lt;br /&gt;
&lt;br /&gt;
Schwartz, D. L., &amp;amp; Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129-184.&lt;br /&gt;
&lt;br /&gt;
Veermans, K., de Jong, T., &amp;amp; van Joolingen, W. R. (2000). Promoting self-directed learning in simulation-based discovery learning environments through intelligent support. Interactive Learning Environments, 8(3), 229-255.&lt;br /&gt;
&lt;br /&gt;
===Future Plans===&lt;br /&gt;
&lt;br /&gt;
Spring 2010:&lt;br /&gt;
Conduct an ethnography in a first-year physics lab that uses invention tasks as a normal classroom practice.&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10221</id>
		<title>Roll - Inquiry</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10221"/>
		<updated>2009-12-04T22:01:03Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Background &amp;amp; Significance */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Helping Students Become Better Scientists Using Structured Inquiry Tasks =&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Ido Roll&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039; || Doug Bonn, James Day&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || Jan. 1, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || May 31, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Site&#039;&#039;&#039; || UBC (not a LearnLab site)&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || Physics&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = ~200&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || ~1,000.&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || no data yet&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
==Background &amp;amp; Significance==&lt;br /&gt;
&lt;br /&gt;
This project focuses on SRL behavior during scientific inquiry and on the relationships between SRL behavior, domain learning, and motivation. While traditional inquiry tasks offer the inherent benefit of letting students practice key self-regulatory skills, they have been shown to be an inefficient, and often unproductive, means of instruction. In the absence of adequate support, students often flounder and become lost within the infinite range of possibilities (Veermans, de Jong &amp;amp; van Joolingen, 2000). Consequently, students often fail to learn the target concepts, or at least do not learn them as efficiently as with direct instruction (Kirschner, Sweller &amp;amp; Clark, 2006).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In this project I evaluate whether supporting students’ metacognitive behavior in inquiry tasks helps them acquire better domain and scientific reasoning skills without reducing the motivational benefits and high agency that inquiry tasks afford. I focus on the Invention as Preparation for Learning framework (IPL; Schwartz &amp;amp; Martin, 2004; Roll, Aleven &amp;amp; Koedinger, 2009). In IPL, students are asked to invent novel mathematical procedures prior to receiving direct instruction on the canonical procedures. IPL has been shown to improve students’ domain knowledge and motivation (Kapur &amp;amp; Lee, 2009; Roll, Aleven &amp;amp; Koedinger, 2009; Schwartz &amp;amp; Martin, 2004). At the same time, students demonstrated poor metacognitive behavior and a lack of learning at the metacognitive level (Roll, 2009).&lt;br /&gt;
&lt;br /&gt;
The current project first seeks to identify the SRL skills that are practiced in IPL. The second stage of the project assesses the transferability of these skills across domain topics and over time. Last, I will investigate the effect of supporting these skills on students&#039; domain and metacognitive learning.&lt;br /&gt;
&lt;br /&gt;
==Glossary==&lt;br /&gt;
&lt;br /&gt;
* Structured Invention Tasks &lt;br /&gt;
* Self-Regulated Learning&lt;br /&gt;
* Scientific Reasoning&lt;br /&gt;
* Inquiry tasks&lt;br /&gt;
&lt;br /&gt;
==Research questions==&lt;br /&gt;
&lt;br /&gt;
The project has 3 steps, each of which focuses on a different research question:&lt;br /&gt;
&lt;br /&gt;
Step 1: What SRL skills are being used and practiced during structured invention tasks?&lt;br /&gt;
&lt;br /&gt;
Step 2: &lt;br /&gt;
&lt;br /&gt;
==Independent Variables==&lt;br /&gt;
&lt;br /&gt;
==Dependent Variables==&lt;br /&gt;
&lt;br /&gt;
==Hypothesis==&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
===Connections to Other Studies===&lt;br /&gt;
&lt;br /&gt;
===Annotated Bibliography===&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
&lt;br /&gt;
Kirschner, P. A., Sweller, J., &amp;amp; Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75-86.&lt;br /&gt;
	&lt;br /&gt;
Klahr, D., &amp;amp; Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12(1), 1-48.&lt;br /&gt;
	&lt;br /&gt;
Roll, I., Aleven, V., &amp;amp; Koedinger, K. R. (2009). Helping students know &#039;further&#039; - increasing the flexibility of students&#039; knowledge using symbolic invention tasks. In N. A. Taatgen, &amp;amp; H. van Rijn (Eds.), Proceedings of the 31st annual conference of the cognitive science society. (pp. 1169-74). Austin, TX: Cognitive Science Society.&lt;br /&gt;
&lt;br /&gt;
Schwartz, D. L., &amp;amp; Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129-184.&lt;br /&gt;
&lt;br /&gt;
Veermans, K., de Jong, T., &amp;amp; van Joolingen, W. R. (2000). Promoting self-directed learning in simulation-based discovery learning environments through intelligent support. Interactive Learning Environments, 8(3), 229-255.&lt;br /&gt;
&lt;br /&gt;
===Future Plans===&lt;br /&gt;
&lt;br /&gt;
Spring 2010:&lt;br /&gt;
Conduct an ethnography in a first-year physics lab that uses invention tasks as a normal classroom practice.&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10216</id>
		<title>Roll - Inquiry</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10216"/>
		<updated>2009-12-04T21:52:54Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* References */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Helping Students Become Better Scientists Using Structured Inquiry Tasks =&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Ido Roll&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039; || Doug Bonn, James Day&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || Jan. 1, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || May 31, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Site&#039;&#039;&#039; || UBC (not a LearnLab site)&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || Physics&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = ~200&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || ~1,000.&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || no data yet&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
==Background &amp;amp; Significance==&lt;br /&gt;
&lt;br /&gt;
This project focuses on self-regulated learning (SRL) behavior during scientific inquiry and on the relationships between SRL behavior, domain learning, and motivation. While traditional inquiry tasks have the inherent benefit of letting students practice key self-regulatory skills, they have been shown to be an inefficient, and often unproductive, means of instruction. In the absence of adequate support, students often flounder and become lost within the infinite range of possibilities (Veermans, de Jong &amp;amp; van Joolingen, 2000). Consequently, students often fail to learn the target concepts, or at least do not learn them as efficiently as with direct instruction (Kirschner, Sweller &amp;amp; Clark, 2006).&lt;br /&gt;
This project evaluates whether supporting students’ metacognitive behavior in inquiry tasks helps students acquire better domain knowledge and scientific reasoning skills, without reducing the motivational benefits and high agency that students enjoy in inquiry tasks. I focus on the Invention as Preparation for Learning framework (Schwartz &amp;amp; Martin, 2004; Roll, Aleven &amp;amp; Koedinger, 2009). In Invention as Preparation for Learning, students first attempt to invent a mathematical procedure to evaluate a target property (e.g., variability or probability). Following the invention attempt, students receive direct instruction on the canonical procedure and practice it. Invention as Preparation for Learning has been shown to improve students’ domain knowledge and motivation (Kapur &amp;amp; Lee, 2009; Roll, Aleven &amp;amp; Koedinger, 2009; Schwartz &amp;amp; Martin, 2004). At the same time, students demonstrated poor metacognitive behavior and a lack of learning at the metacognitive level (Roll, 2009).&lt;br /&gt;
&lt;br /&gt;
==Glossary==&lt;br /&gt;
&lt;br /&gt;
* Structured Invention Tasks &lt;br /&gt;
* Self-Regulated Learning&lt;br /&gt;
* Scientific Reasoning&lt;br /&gt;
* Inquiry tasks&lt;br /&gt;
&lt;br /&gt;
==Research questions==&lt;br /&gt;
&lt;br /&gt;
The project has 3 steps, each of which focuses on a different research question:&lt;br /&gt;
&lt;br /&gt;
Step 1: What SRL skills are being used and practiced during structured invention tasks?&lt;br /&gt;
&lt;br /&gt;
Step 2: &lt;br /&gt;
&lt;br /&gt;
==Independent Variables==&lt;br /&gt;
&lt;br /&gt;
==Dependent Variables==&lt;br /&gt;
&lt;br /&gt;
==Hypothesis==&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
===Connections to Other Studies===&lt;br /&gt;
&lt;br /&gt;
===Annotated Bibliography===&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
&lt;br /&gt;
Kirschner, P. A., Sweller, J., &amp;amp; Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75-86.&lt;br /&gt;
	&lt;br /&gt;
Klahr, D., &amp;amp; Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12(1), 1-48.&lt;br /&gt;
	&lt;br /&gt;
Roll, I., Aleven, V., &amp;amp; Koedinger, K. R. (2009). Helping students know &#039;further&#039; - increasing the flexibility of students&#039; knowledge using symbolic invention tasks. In N. A. Taatgen, &amp;amp; H. van Rijn (Eds.), Proceedings of the 31st annual conference of the cognitive science society. (pp. 1169-74). Austin, TX: Cognitive Science Society.&lt;br /&gt;
&lt;br /&gt;
Schwartz, D. L., &amp;amp; Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129-184.&lt;br /&gt;
&lt;br /&gt;
Veermans, K., de Jong, T., &amp;amp; van Joolingen, W. R. (2000). Promoting self-directed learning in simulation-based discovery learning environments through intelligent support. Interactive Learning Environments, 8(3), 229-255.&lt;br /&gt;
&lt;br /&gt;
===Future Plans===&lt;br /&gt;
&lt;br /&gt;
Spring 2010:&lt;br /&gt;
Conduct an ethnographic study in a first-year physics lab that uses invention tasks as a normal classroom practice.&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10215</id>
		<title>Roll - Inquiry</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10215"/>
		<updated>2009-12-04T21:51:07Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Background &amp;amp; Significance */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Helping Students Become Better Scientists Using Structured Inquiry Tasks =&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Ido Roll&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039; || Doug Bonn, James Day&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || Jan. 1, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || May 31, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Site&#039;&#039;&#039; || UBC (not a LearnLab site)&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || Physics&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = ~200&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || ~1,000.&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || no data yet&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
==Background &amp;amp; Significance==&lt;br /&gt;
&lt;br /&gt;
This project focuses on self-regulated learning (SRL) behavior during scientific inquiry and on the relationships between SRL behavior, domain learning, and motivation. While traditional inquiry tasks have the inherent benefit of letting students practice key self-regulatory skills, they have been shown to be an inefficient, and often unproductive, means of instruction. In the absence of adequate support, students often flounder and become lost within the infinite range of possibilities (Veermans, de Jong &amp;amp; van Joolingen, 2000). Consequently, students often fail to learn the target concepts, or at least do not learn them as efficiently as with direct instruction (Kirschner, Sweller &amp;amp; Clark, 2006).&lt;br /&gt;
This project evaluates whether supporting students’ metacognitive behavior in inquiry tasks helps students acquire better domain knowledge and scientific reasoning skills, without reducing the motivational benefits and high agency that students enjoy in inquiry tasks. I focus on the Invention as Preparation for Learning framework (Schwartz &amp;amp; Martin, 2004; Roll, Aleven &amp;amp; Koedinger, 2009). In Invention as Preparation for Learning, students first attempt to invent a mathematical procedure to evaluate a target property (e.g., variability or probability). Following the invention attempt, students receive direct instruction on the canonical procedure and practice it. Invention as Preparation for Learning has been shown to improve students’ domain knowledge and motivation (Kapur &amp;amp; Lee, 2009; Roll, Aleven &amp;amp; Koedinger, 2009; Schwartz &amp;amp; Martin, 2004). At the same time, students demonstrated poor metacognitive behavior and a lack of learning at the metacognitive level (Roll, 2009).&lt;br /&gt;
&lt;br /&gt;
==Glossary==&lt;br /&gt;
&lt;br /&gt;
* Structured Invention Tasks &lt;br /&gt;
* Self-Regulated Learning&lt;br /&gt;
* Scientific Reasoning&lt;br /&gt;
* Inquiry tasks&lt;br /&gt;
&lt;br /&gt;
==Research questions==&lt;br /&gt;
&lt;br /&gt;
The project has 3 steps, each of which focuses on a different research question:&lt;br /&gt;
&lt;br /&gt;
Step 1: What SRL skills are being used and practiced during structured invention tasks?&lt;br /&gt;
&lt;br /&gt;
Step 2: &lt;br /&gt;
&lt;br /&gt;
==Independent Variables==&lt;br /&gt;
&lt;br /&gt;
==Dependent Variables==&lt;br /&gt;
&lt;br /&gt;
==Hypothesis==&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
===Connections to Other Studies===&lt;br /&gt;
&lt;br /&gt;
===Annotated Bibliography===&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
&lt;br /&gt;
Klahr, D., &amp;amp; Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12(1), 1-48.&lt;br /&gt;
	&lt;br /&gt;
Roll, I., Aleven, V., &amp;amp; Koedinger, K. R. (2009). Helping students know &#039;further&#039; - increasing the flexibility of students&#039; knowledge using symbolic invention tasks. In N. A. Taatgen, &amp;amp; H. van Rijn (Eds.), Proceedings of the 31st annual conference of the cognitive science society. (pp. 1169-74). Austin, TX: Cognitive Science Society.&lt;br /&gt;
&lt;br /&gt;
Schwartz, D. L., &amp;amp; Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129-184.&lt;br /&gt;
	&lt;br /&gt;
&lt;br /&gt;
	&lt;br /&gt;
&lt;br /&gt;
===Future Plans===&lt;br /&gt;
&lt;br /&gt;
Spring 2010:&lt;br /&gt;
Conduct an ethnographic study in a first-year physics lab that uses invention tasks as a normal classroom practice.&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10195</id>
		<title>Roll - Inquiry</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10195"/>
		<updated>2009-12-04T20:13:27Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Helping Students Become Better Scientists Using Structured Inquiry Tasks =&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Ido Roll&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039; || Doug Bonn, James Day&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || Jan. 1, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || May 31, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Site&#039;&#039;&#039; || UBC (not a LearnLab site)&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || Physics&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = ~200&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || ~1,000.&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || no data yet&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
==Background &amp;amp; Significance==&lt;br /&gt;
&lt;br /&gt;
This project focuses on self-regulated learning (SRL) behavior during scientific inquiry and on the relationships between SRL behavior, domain learning, and motivation. While traditional inquiry tasks have the inherent benefit of letting students practice key self-regulatory skills, they have been shown to be an inefficient, and often unproductive, means of instruction. In the absence of adequate support, students often flounder and become lost within the infinite range of possibilities (Veermans, de Jong &amp;amp; van Joolingen, 2000). Consequently, students often fail to learn the target concepts, or at least do not learn them as efficiently as with direct instruction (Kirschner, Sweller &amp;amp; Clark, 2006).&lt;br /&gt;
This project evaluates whether supporting students’ metacognitive behavior in inquiry tasks helps students acquire better domain knowledge and scientific reasoning skills, without reducing the motivational benefits and high agency that students enjoy in inquiry tasks. I focus on the Invention as Preparation for Learning framework (Schwartz &amp;amp; Martin, 2004; Roll, Aleven &amp;amp; Koedinger, 2009). In Invention as Preparation for Learning, students first attempt to invent a mathematical procedure to evaluate a target property (e.g., variability or probability). Following the invention attempt, students receive direct instruction on the canonical procedure and practice it. Invention as Preparation for Learning has been shown to improve students’ domain knowledge and motivation (Kapur &amp;amp; Lee, 2009; Roll, Aleven &amp;amp; Koedinger, 2009; Schwartz &amp;amp; Martin, 2004). At the same time, students demonstrated poor metacognitive behavior and a lack of learning at the metacognitive level (Roll, 2009). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Glossary==&lt;br /&gt;
&lt;br /&gt;
* Structured Invention Tasks &lt;br /&gt;
* Self-Regulated Learning&lt;br /&gt;
* Scientific Reasoning&lt;br /&gt;
* Inquiry tasks&lt;br /&gt;
&lt;br /&gt;
==Research questions==&lt;br /&gt;
&lt;br /&gt;
The project has 3 steps, each of which focuses on a different research question:&lt;br /&gt;
&lt;br /&gt;
Step 1: What SRL skills are being used and practiced during structured invention tasks?&lt;br /&gt;
&lt;br /&gt;
Step 2: &lt;br /&gt;
&lt;br /&gt;
==Independent Variables==&lt;br /&gt;
&lt;br /&gt;
==Dependent Variables==&lt;br /&gt;
&lt;br /&gt;
==Hypothesis==&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
===Connections to Other Studies===&lt;br /&gt;
&lt;br /&gt;
===Annotated Bibliography===&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
&lt;br /&gt;
Klahr, D., &amp;amp; Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12(1), 1-48.&lt;br /&gt;
	&lt;br /&gt;
Roll, I., Aleven, V., &amp;amp; Koedinger, K. R. (2009). Helping students know &#039;further&#039; - increasing the flexibility of students&#039; knowledge using symbolic invention tasks. In N. A. Taatgen, &amp;amp; H. van Rijn (Eds.), Proceedings of the 31st annual conference of the cognitive science society. (pp. 1169-74). Austin, TX: Cognitive Science Society.&lt;br /&gt;
&lt;br /&gt;
Schwartz, D. L., &amp;amp; Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129-184.&lt;br /&gt;
	&lt;br /&gt;
&lt;br /&gt;
	&lt;br /&gt;
&lt;br /&gt;
===Future Plans===&lt;br /&gt;
&lt;br /&gt;
Spring 2010:&lt;br /&gt;
Conduct an ethnographic study in a first-year physics lab that uses invention tasks as a normal classroom practice.&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10194</id>
		<title>Roll - Inquiry</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10194"/>
		<updated>2009-12-04T19:56:25Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Helping Students Become Better Scientists Using Structured Inquiry Tasks =&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Ido Roll&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039; || Doug Bonn, James Day&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || Jan. 1, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || May 31, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Site&#039;&#039;&#039; || UBC (not a LearnLab site)&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || Physics&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = ~200&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || ~1,000.&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || no data yet&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
==Background &amp;amp; Significance==&lt;br /&gt;
&lt;br /&gt;
==Glossary==&lt;br /&gt;
&lt;br /&gt;
==Research questions==&lt;br /&gt;
&lt;br /&gt;
The project has 3 steps, each of which focuses on a different research question:&lt;br /&gt;
&lt;br /&gt;
Step 1: What SRL skills are being used and practiced during structured invention tasks?&lt;br /&gt;
&lt;br /&gt;
Step 2: &lt;br /&gt;
&lt;br /&gt;
==Independent Variables==&lt;br /&gt;
&lt;br /&gt;
==Dependent Variables==&lt;br /&gt;
&lt;br /&gt;
==Hypothesis==&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
===Connections to Other Studies===&lt;br /&gt;
&lt;br /&gt;
===Annotated Bibliography===&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
&lt;br /&gt;
Klahr, D., &amp;amp; Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12(1), 1-48.&lt;br /&gt;
	&lt;br /&gt;
Roll, I., Aleven, V., &amp;amp; Koedinger, K. R. (2009). Helping students know &#039;further&#039; - increasing the flexibility of students&#039; knowledge using symbolic invention tasks. In N. A. Taatgen, &amp;amp; H. van Rijn (Eds.), Proceedings of the 31st annual conference of the cognitive science society. (pp. 1169-74). Austin, TX: Cognitive Science Society.&lt;br /&gt;
&lt;br /&gt;
Schwartz, D. L., &amp;amp; Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129-184.&lt;br /&gt;
	&lt;br /&gt;
&lt;br /&gt;
	&lt;br /&gt;
&lt;br /&gt;
===Future Plans===&lt;br /&gt;
&lt;br /&gt;
Spring 2010:&lt;br /&gt;
Conduct an ethnographic study in a first-year physics lab that uses invention tasks as a normal classroom practice.&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10193</id>
		<title>Roll - Inquiry</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10193"/>
		<updated>2009-12-04T19:53:57Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Summary Table */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Helping Students Become Better Scientists Using Structured Inquiry Tasks =&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Ido Roll&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039; || Doug Bonn, James Day&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || Jan. 1, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || May 31, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Site&#039;&#039;&#039; || UBC (not a LearnLab site)&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || Physics&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = ~200&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || ~1,000.&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || no data yet&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
==Background &amp;amp; Significance==&lt;br /&gt;
&lt;br /&gt;
==Glossary==&lt;br /&gt;
&lt;br /&gt;
==Research questions==&lt;br /&gt;
&lt;br /&gt;
The project has 3 steps, each of which focuses on a different research question:&lt;br /&gt;
&lt;br /&gt;
Step 1: What SRL skills are being used and practiced during structured invention tasks?&lt;br /&gt;
&lt;br /&gt;
Step 2: &lt;br /&gt;
&lt;br /&gt;
==Independent Variables==&lt;br /&gt;
&lt;br /&gt;
==Dependent Variables==&lt;br /&gt;
&lt;br /&gt;
==Hypothesis==&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
===Connections to Other Studies===&lt;br /&gt;
&lt;br /&gt;
===Annotated Bibliography===&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
&lt;br /&gt;
Klahr, D., &amp;amp; Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12(1), 1-48.&lt;br /&gt;
	&lt;br /&gt;
Roll, I., Aleven, V., &amp;amp; Koedinger, K. R. (2009). Helping students know &#039;further&#039; - increasing the flexibility of students&#039; knowledge using symbolic invention tasks. In N. A. Taatgen, &amp;amp; H. van Rijn (Eds.), Proceedings of the 31st annual conference of the cognitive science society. (pp. 1169-74). Austin, TX: Cognitive Science Society.&lt;br /&gt;
&lt;br /&gt;
Schwartz, D. L., &amp;amp; Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129-184.&lt;br /&gt;
	&lt;br /&gt;
&lt;br /&gt;
	&lt;br /&gt;
&lt;br /&gt;
===Future Plans===&lt;br /&gt;
&lt;br /&gt;
Spring 2010:&lt;br /&gt;
Conduct an ethnographic study in a first-year physics lab that uses invention tasks as a normal classroom practice.&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10192</id>
		<title>Roll - Inquiry</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_-_Inquiry&amp;diff=10192"/>
		<updated>2009-12-04T19:50:28Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: New page: = Helping Students Become Better Scientists Using Structured Inquiry Tasks =  === Summary Table === {| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot; | &amp;#039;&amp;#039;&amp;#039;PIs&amp;#039;&amp;#039;&amp;#039; || I...&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Helping Students Become Better Scientists Using Structured Inquiry Tasks =&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Ido Roll&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039; || Doug Bonn, James Day&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || Jan. 1, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || May 31, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Site&#039;&#039;&#039; || UBC (not a LearnLab site)&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || Physics&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || &#039;&#039;N&#039;&#039; = ~200&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || ~1,000.&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;DataShop&#039;&#039;&#039; || no data yet&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Abstract==&lt;br /&gt;
&lt;br /&gt;
==Background &amp;amp; Significance==&lt;br /&gt;
&lt;br /&gt;
==Glossary==&lt;br /&gt;
&lt;br /&gt;
==Research questions==&lt;br /&gt;
&lt;br /&gt;
The project has 3 steps, each of which focuses on a different research question:&lt;br /&gt;
&lt;br /&gt;
Step 1: What SRL skills are being used and practiced during structured invention tasks?&lt;br /&gt;
&lt;br /&gt;
Step 2: &lt;br /&gt;
&lt;br /&gt;
==Independent Variables==&lt;br /&gt;
&lt;br /&gt;
==Dependent Variables==&lt;br /&gt;
&lt;br /&gt;
==Hypothesis==&lt;br /&gt;
&lt;br /&gt;
==Results==&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&lt;br /&gt;
===Connections to Other Studies===&lt;br /&gt;
&lt;br /&gt;
===Annotated Bibliography===&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
&lt;br /&gt;
Klahr, D., &amp;amp; Dunbar, K. (1988). Dual space search during scientific reasoning. Cognitive Science, 12(1), 1-48.&lt;br /&gt;
	&lt;br /&gt;
Roll, I., Aleven, V., &amp;amp; Koedinger, K. R. (2009). Helping students know &#039;further&#039; - increasing the flexibility of students&#039; knowledge using symbolic invention tasks. In N. A. Taatgen, &amp;amp; H. van Rijn (Eds.), Proceedings of the 31st annual conference of the cognitive science society. (pp. 1169-74). Austin, TX: Cognitive Science Society.&lt;br /&gt;
&lt;br /&gt;
Schwartz, D. L., &amp;amp; Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129-184.&lt;br /&gt;
	&lt;br /&gt;
&lt;br /&gt;
	&lt;br /&gt;
&lt;br /&gt;
===Future Plans===&lt;br /&gt;
&lt;br /&gt;
Spring 2010:&lt;br /&gt;
Conduct an ethnographic study in a first-year physics lab that uses invention tasks as a normal classroom practice.&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=The_Help_Tutor_Roll_Aleven_McLaren&amp;diff=9504</id>
		<title>The Help Tutor Roll Aleven McLaren</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=The_Help_Tutor_Roll_Aleven_McLaren&amp;diff=9504"/>
		<updated>2009-05-21T04:23:19Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Evaluation of goal 4: Improve future metacognitive behavior */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Towards Tutoring [[Metacognition]] - The Case of Help Seeking ==&lt;br /&gt;
Ido Roll, Vincent Aleven, Bruce M. McLaren, Kenneth Koedinger&lt;br /&gt;
&lt;br /&gt;
=== Meta-data ===&lt;br /&gt;
&lt;br /&gt;
PIs: Vincent Aleven, Ido Roll, Bruce M. McLaren, Ken Koedinger&lt;br /&gt;
&lt;br /&gt;
Other Contributors: EJ Ryu (programmer)&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Study # !! Start Date !! End Date !! LearnLab Site !!  # of Students !! Total Participant Hours !! DataShop? &lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;1&#039;&#039;&#039; || 2004       || 2004     || Analysis of existing data || 40 || 280 || No, old data&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;2&#039;&#039;&#039; || 2005       || 2005     || Analysis of existing data || 70 || 105 || No, old data&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;3&#039;&#039;&#039; || 5/2005     || 5/2005   || Hampton &amp;amp; Wilkinsburg (Geometry) || 60 || 270 || No, incompatible format&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;4&#039;&#039;&#039; || 2/2006     || 4/2006   || CWCTC (Geometry)          || 84 || 1,008 || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
&lt;br /&gt;
While working with a tutoring system, students are expected to regulate their own learning process. However, they often demonstrate inadequate [[metacognition|metacognitive processes]] in doing so. For example, students often ask for help too frequently or not frequently enough. &lt;br /&gt;
In this project we built an Intelligent Tutoring System to teach [[metacognition]], and in particular, to improve students&#039; [[help-seeking behavior]].  Our Help Seeking Support Environment includes three components:&lt;br /&gt;
# Direct [[help-seeking behavior|help seeking]] [[explicit instruction|instruction]], given by the teacher&lt;br /&gt;
# A [[Self-Assessment]] Tutor, to help students evaluate their own need for help&lt;br /&gt;
# The Help Tutor - a domain-independent agent that can be added as an adjunct to a [[cognitive tutor]]. Rather than making help-seeking decisions for the students, the Help Tutor teaches better help-seeking skills by tracing students&#039; actions on a (meta)cognitive [[help-seeking model]] and giving students appropriate feedback. &lt;br /&gt;
&lt;br /&gt;
In a series of [[in vivo experiment]]s, the Help Tutor accurately detected help-seeking errors that were associated with poorer learning and with poorer [[declarative]] and [[procedural]] [[knowledge component]]s of help seeking.  The main findings were that students made fewer help-seeking errors while working with the Help Tutor and acquired better help seeking [[declarative]] [[knowledge component]]s. &lt;br /&gt;
However, we did not find evidence that this led to an improvement in learning at the domain level or to better [[help-seeking behavior]] in a paper-and-pencil environment. &lt;br /&gt;
We pose a number of hypotheses in an attempt to explain these results. We question the current focus of metacognitive tutoring, and suggest ways to reexamine the role of [[help facilities]] and of metacognitive tutoring within Intelligent Tutoring Systems.&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
Teaching [[metacognition]] holds the promise not only of improving current learning in the domain of interest, but also, perhaps mainly, of accelerating future learning and supporting successful regulation of independent learning. One example of metacognitive knowledge is help-seeking [[knowledge component]]s: the ability to identify the need for help, and to elicit appropriate assistance from the [[relevant resources|help facilities]].  &lt;br /&gt;
However, considerable evidence shows that metacognitive [[knowledge component]]s are in need of better support. For example, while working with Intelligent Tutoring Systems, students try to &amp;quot;[[game the system]]&amp;quot; or do not [[self-explanation|self-explain]] enough. Similarly, research shows that students&#039; [[help-seeking behavior]] leaves much room for improvement. &lt;br /&gt;
&lt;br /&gt;
==== Shallow help seeking [[knowledge component]]s ====&lt;br /&gt;
Research shows that students do not use their help-seeking knowledge components appropriately. For example, Aleven et al. (2006) show that 30% of students&#039; actions were consecutive fast help requests (a common form of [[help abuse]], termed &#039;[[clicking through hints]]&#039;), without taking enough time to read the requested hints.  &lt;br /&gt;
Extensive log-file analysis suggests that students apply faulty [[knowledge component]]s such as the following:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faulty [[procedural]] [[knowledge component]]s:&#039;&#039;&#039;&lt;br /&gt;
Cognitive aspects:&lt;br /&gt;
  If I don’t know the answer =&amp;gt; &lt;br /&gt;
  I should guess&lt;br /&gt;
&lt;br /&gt;
Motivational aspects:&lt;br /&gt;
  If I get the answer correct =&amp;gt;&lt;br /&gt;
  I achieved the goal&lt;br /&gt;
&lt;br /&gt;
Social aspects:&lt;br /&gt;
  If I ask for help =&amp;gt;&lt;br /&gt;
  I am weak&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faulty [[declarative]] [[knowledge component]]s:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
  Asking for hints will always reduce my skill level&lt;br /&gt;
&lt;br /&gt;
  Making an error is better than asking for a hint&lt;br /&gt;
&lt;br /&gt;
  Only weak people ask for help&lt;br /&gt;
&lt;br /&gt;
==== Teaching vs. supporting [[metacognition]] ====&lt;br /&gt;
&lt;br /&gt;
Several systems support students&#039; metacognitive actions in a way that encourages, or even forces, students to learn productively and efficiently. For example, a tutoring system can require the student to self-explain. While this approach is likely to improve domain learning in the supported environment, the effect is not likely to persist beyond the scope of the tutoring system, and therefore is not likely to help students become better future learners. &lt;br /&gt;
&lt;br /&gt;
Towards that end, we chose not to &#039;&#039;&#039;support&#039;&#039;&#039; students&#039; help seeking actions, but to &#039;&#039;&#039;teach&#039;&#039;&#039; them better help-seeking skills. Rather than making the metacognitive decisions for the students (for example, by preventing help-seeking errors or gaming opportunities), this study focuses on helping students refine their Help Seeking [[knowledge component]]s and acquire better [[feature validity]] of their [[help-seeking behavior|help-seeking]] [[metacognition|metacognitive skills]].&lt;br /&gt;
&lt;br /&gt;
By doing so, we examine whether metacognitive knowledge can be taught using familiar conventional domain-level pedagogies.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
See [[:Category:Help Tutor|Help Tutor Glossary]]&lt;br /&gt;
&lt;br /&gt;
=== Research questions ===&lt;br /&gt;
&lt;br /&gt;
# Can conventional and well-established instructional principles in the domain level be used to tutor [[metacognition|metacognitive]] [[knowledge component]]s such as [[help-seeking behavior|Help Seeking]] [[knowledge component]]s?&lt;br /&gt;
# Does the practice of better metacognitive behavior translate, in turn, into better domain learning?&lt;br /&gt;
&lt;br /&gt;
In addition, the project makes the following contributions:&lt;br /&gt;
# An improved understanding of the nature of help-seeking knowledge and its acquisition.&lt;br /&gt;
# A novel framework for the design of  goals, interaction and assessment for metacognitive tutoring.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Two studies were performed with the Help Tutor. In both studies, the independent variable was the presence or absence of help-seeking support.&lt;br /&gt;
The control condition used the conventional Geometry Cognitive Tutor:&lt;br /&gt;
&lt;br /&gt;
[[Image:Geometry Cognitive Tutor.jpg]]&lt;br /&gt;
&lt;br /&gt;
The treatment condition varied between studies:&lt;br /&gt;
* Study one: The Geometry Cognitive Tutor + the Help Tutor&lt;br /&gt;
* Study two: The Geometry Cognitive Tutor + the Help Seeking Support Environment (help seeking explicit instruction, self-assessment tutor, and Help Tutor)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Help Tutor:&#039;&#039;&#039;&lt;br /&gt;
The Help Tutor is a Cognitive Tutor in its own right: it identifies recommended types of actions by tracing students’ interaction with the Geometry Cognitive Tutor relative to a metacognitive help-seeking model. When students perform actions that deviate from the recommended ones, the Help Tutor presents a message that stresses the recommended action to be taken. Messages from the metacognitive Help Tutor and the domain-level Cognitive Tutor are coordinated, so that the student receives only the most helpful message at each point [2].   &lt;br /&gt;
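The coordination between the two tutors can be sketched as a simple arbitration function that picks one message per step. The priority rule below is an assumption for illustration; the deployed tutors&#039; actual coordination logic is not specified here.&lt;br /&gt;

```python
# Illustrative arbitration between the domain-level Cognitive Tutor and
# the metacognitive Help Tutor: only one message is shown per step.
# The priority rule is an assumed example, not the published mechanism.

def select_message(domain_msg, metacognitive_msg):
    """Pick the single most helpful message to present to the student."""
    if domain_msg is None:
        return metacognitive_msg
    if metacognitive_msg is None:
        return domain_msg
    # Assumption: an error-specific domain message (e.g. one targeting a
    # known misconception) outranks generic help-seeking feedback.
    if domain_msg.get("specific", False):
        return domain_msg
    return metacognitive_msg
```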
&lt;br /&gt;
[[Image:The Help-tutor.jpg]]&lt;br /&gt;
[[image:The Help Seeking Model.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Self-Assessment Tutor:&#039;&#039;&#039;&lt;br /&gt;
The ability to correctly self-assess one’s own knowledge level is correlated with strategic use of help (Tobias and Everson, 2002). The Self-Assessment Tutor is designed to tutor students on &lt;br /&gt;
their self-assessment skills; to help students make appropriate learning decisions based on their self-assessment; and mainly, to give students a tutoring environment, low on cognitive load, in which they can practice using their help-seeking skills.  &lt;br /&gt;
The curriculum used by the Treatment group in study two consists of interleaving Self Assessment and Cognitive Tutor + Help Tutor sessions, with the Self Assessment sessions taking about 10% of the students’ time. During each self-assessment session the student assesses the skills to be practiced in the subsequent Cognitive Tutor section.&lt;br /&gt;
  &lt;br /&gt;
[[Image:The Self-Assessment Tutor.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Explicit help-seeking instruction:&#039;&#039;&#039;&lt;br /&gt;
As White and Frederiksen demonstrated (1998), reflecting in the classroom environment on the desired metacognitive process helps students internalize it. With that goal in mind, we created a short classroom lesson about help seeking with the following objectives: to give students a better declarative understanding of desired and effective help-seeking behavior; to improve their dispositions and attitudes towards seeking help; and to frame the help-seeking knowledge as an important learning goal, alongside Geometry knowledge, for the coming few weeks. The instruction includes a video presentation with examples of productive and faulty help-seeking behavior and the appropriate help-seeking principles. &lt;br /&gt;
&lt;br /&gt;
[[Image:Explicit help-seeking instruction.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Dependent variables ===&lt;br /&gt;
&lt;br /&gt;
The study uses two levels of dependent measures:&lt;br /&gt;
# Directly assessing Help Seeking skills&lt;br /&gt;
# Assessing domain-level learning, and thereby evaluating the contribution of the help-seeking skills.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Assessments of help-seeking knowledge:&lt;br /&gt;
* [[Normal post-test]]: &lt;br /&gt;
** Declarative: hypothetical help-seeking dilemmas&lt;br /&gt;
** Procedural: Help seeking error rate while working with the tutor&lt;br /&gt;
* [[Transfer]]: Ability to use optional hints embedded within certain test items in the paper test.&lt;br /&gt;
&lt;br /&gt;
[[Image:embedded hints.jpg]]&lt;br /&gt;
&lt;br /&gt;
2. Assessments of domain knowledge:&lt;br /&gt;
* [[Normal post-test]]: Problem solving and explanation items like those in the tutor&#039;s instruction.&lt;br /&gt;
* [[Transfer]]: &lt;br /&gt;
** Data insufficiency (or &amp;quot;not enough information&amp;quot;) items.&lt;br /&gt;
** Conceptual understanding items (study two only)&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
&lt;br /&gt;
The combination of explicit help-seeking instruction, on-time feedback on help-seeking errors, and raised awareness of knowledge deficits will&lt;br /&gt;
* Improve [[feature validity]] of students&#039; help seeking skills&lt;br /&gt;
and thus, in turn, will&lt;br /&gt;
* Improve learning of domain knowledge by using those skills effectively.&lt;br /&gt;
&lt;br /&gt;
==== Instructional Principles ====&lt;br /&gt;
&lt;br /&gt;
The main principle being evaluated here is whether [[Roll_help seeking principle | instruction should support meta-cognition in the context of problem solving]] by using principles of cognitive tutoring such as:&lt;br /&gt;
* Giving direct instruction&lt;br /&gt;
* Giving immediate feedback on errors&lt;br /&gt;
* Prompting for self-assessment&lt;br /&gt;
&lt;br /&gt;
This utilizes the following instructional principles:&lt;br /&gt;
&lt;br /&gt;
* The Self-Assessment Tutor utilizes the [[Reflection questions]] principle&lt;br /&gt;
* The Help Tutor itself utilizes the [[Tutoring feedback]] principle&lt;br /&gt;
* The Help Seeking Instruction utilizes the [[Explicit instruction]] principle.&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
As seen below (adapted from Roll et al. 2006), metacognitive tutoring has the following goals:&lt;br /&gt;
# First, the tutoring system should capture metacognitive errors (in our case, help-seeking errors).&lt;br /&gt;
# Then, it should lead to an improved metacognitive behavior within the tutoring system.&lt;br /&gt;
# This, in turn, should lead to an improvement in the domain learning.&lt;br /&gt;
# The effect should persist beyond the scope of the tutoring system.&lt;br /&gt;
# As a result, students are expected to become better future learners.&lt;br /&gt;
&lt;br /&gt;
[[Image:Roll_Pyramid.jpg]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 1: Capture metacognitive errors ====&lt;br /&gt;
&lt;br /&gt;
In study 1, 17% of students&#039; actions were classified as errors. These errors were significantly negatively correlated with learning (r=-0.42) - the more help-seeking errors captured by the system, the smaller the improvement from pre- to post-test.&lt;br /&gt;
These data suggest that the help-seeking model captures meaningful aspects of students&#039; behavior, and that the goal was achieved - the Help Tutor captures help-seeking errors.&lt;br /&gt;
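A correlation like the r = -0.42 reported above is a plain Pearson correlation between per-student help-seeking error rates and pre-to-post learning gains. The sketch below shows the computation on synthetic numbers chosen for illustration; it does not reproduce the study data.&lt;br /&gt;

```python
import math

# Pearson correlation between two per-student measures, e.g. help-seeking
# error rate vs. pre-to-post learning gain. The example values below are
# synthetic, for illustration only.

def pearson_r(xs, ys):
    """Return the Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Synthetic pattern: higher error rates go with smaller learning gains,
# so the coefficient comes out negative.
error_rates = [0.05, 0.10, 0.20, 0.25, 0.40]
gains = [0.30, 0.28, 0.15, 0.18, 0.05]
```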
&lt;br /&gt;
[[Image:help-seeking_and_learning.jpg]]&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 2: Improve metacognitive behavior ====&lt;br /&gt;
&lt;br /&gt;
Students’ hint usage improved significantly while working with the Help Tutor across several measures, most notably, help-seeking error rate, frequency of asking for bottom-out hints, and hint reading time. Study 2 also shows that these aspects of improvement persist even beyond the scope of the Help Tutor. These improvements consistently reached significance only after an extended period of working with the Help Tutor (i.e., after the second part of the study). We hypothesize that since the Help Tutor feedback appeared in two different areas of geometry learning, students could more easily acquire the domain-independent help-seeking skills and thus transfer them better to the other subject matter areas addressed between and after the two in which the Help Tutor was used. &lt;br /&gt;
The addition of the help-seeking instruction and self-assessment episodes in Study 2 led to improvements in students’ conceptual help-seeking knowledge, time to read hints, and an apparent improvement in hint-to-error ratio. The fact that students were more likely to ask for hints than to commit more errors provides behavioral evidence that the self-assessment instruction, provided by both the Self-Assessment Tutor and by certain messages in the Help Tutor, leads students to be more aware of their need for help, reinforcing the causal link between self-assessment and strategic help seeking (Tobias &amp;amp; Everson, 2002). &lt;br /&gt;
Although students’ hint usage improved, no major improvements in the deliberateness of students’ solution attempts were found (besides that indicated by the differences in hint-to-error ratio). It may be that the Help Tutor did not support this aspect of learning well enough. An alternative explanation is that as long as students attempted to solve problems (whether successfully or not) they were too occupied by the problem-solving attempts and thus did not devote attention (and cognitive resources) to the Help Tutor until they reached an impasse that required them to ask for help. This may be an outcome of our design decision to give feedback in the context of domain learning. &lt;br /&gt;
&lt;br /&gt;
[[Image:help-seeking_behavior.jpg]]&lt;br /&gt;
&lt;br /&gt;
This data suggests that while the system was not able to improve all aspects of the desired metacognitive behavior, it did improve students&#039; behavior on the common types of errors.&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 3: Improve domain learning ====&lt;br /&gt;
&lt;br /&gt;
While students&#039; help-seeking behavior improved while working with the Help Tutor (in study 1) or the full Help Seeking Support Environment (in study 2), we did not observe differences in learning between the two conditions in either study 1 or study 2.&lt;br /&gt;
&lt;br /&gt;
Study 1 results:&lt;br /&gt;
&lt;br /&gt;
[[Image:study_1_results.jpg]]&lt;br /&gt;
&lt;br /&gt;
Study 2 results:&lt;br /&gt;
&lt;br /&gt;
[[Image:study_2_results.jpg]]&lt;br /&gt;
&lt;br /&gt;
* Note: since the tests at times 1 and 2 evaluated different instructional units (Angles vs. Quadrilaterals), lower grades at time 2 do not indicate a decrease in knowledge.&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 4: Improve future metacognitive behavior ====&lt;br /&gt;
&lt;br /&gt;
As mentioned earlier, the two units of study 2 were spread apart by one month. We collected data from students&#039; behavior in the months between the two units and following the study. During these months students repeated previous material in preparation for statewide exams.&lt;br /&gt;
As seen in the table below, the main effects of the help-seeking environment persisted even once students moved on to working with the native Cognitive Tutor! Overall, students who received help-seeking support during the study took more time for their actions following the study, especially for reading hints - their hint reading time before asking for an additional hint is longer by almost one effect size in the month following the study (12 vs. 8 seconds, t(44)=3.0, p&amp;lt;.01). Also, students who were in the Help condition did not drill down through the hints as often, though this effect is only marginally significant (average hint level: 2.2 vs. 2.6, t(51)=1.9, p=.06). These effects are more consistently significant after both units, suggesting that having the study stretched across two units indeed helped students better acquire the domain-independent help-seeking skills.&lt;br /&gt;
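Comparisons such as &amp;quot;12 vs. 8 seconds, t(44)=3.0&amp;quot; rest on a two-sample t statistic over per-student means. A minimal pooled-variance sketch, with hypothetical sample values, might look like:&lt;br /&gt;

```python
import math

# Pooled two-sample t statistic of the kind behind comparisons like
# "12 vs. 8 seconds, t(44) = 3.0" above. The sample values used in the
# tests are hypothetical per-student mean hint reading times.

def pooled_t(a, b):
    """Return (t, df) for a pooled-variance two-sample t test."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # Unbiased sample variances of each group.
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    # Pooled variance and standard error of the mean difference.
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = math.sqrt(sp2 * (1.0 / na + 1.0 / nb))
    return (ma - mb) / se, na + nb - 2
```

The degrees of freedom (na + nb - 2) are what the &amp;quot;44&amp;quot; in t(44) reports.&lt;br /&gt;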
&lt;br /&gt;
&lt;br /&gt;
[[Image:four months of HT.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, students&#039; help seeking behavior was evaluated in a transfer environment - the paper and pencil tests.&lt;br /&gt;
&lt;br /&gt;
Hypothetical help seeking dilemmas, such as the one described below, were used to evaluate declarative help-seeking knowledge.&lt;br /&gt;
&lt;br /&gt;
  1. You tried to answer a question that you know, but for some reason the tutor says that your answer is wrong. What should you do? &lt;br /&gt;
  [ ] First I would review my calculations. Perhaps I can find the mistake myself? &lt;br /&gt;
  [ ] The Tutor must have made a mistake. I will retype the same answer again. &lt;br /&gt;
  [ ] I would ask for a hint, to understand my mistake.&lt;br /&gt;
&lt;br /&gt;
Procedural help-seeking skills were evaluated using embedded hints in the tests (see figure in the Dependent Measures section above).&lt;br /&gt;
&lt;br /&gt;
In study 1 (which included only the Help Tutor component), students in the Treatment condition demonstrated neither better declarative nor better procedural help-seeking knowledge, compared with the Control condition.&lt;br /&gt;
&lt;br /&gt;
In study 2 (which included the explicit help-seeking instruction and the Self-Assessment Tutor in addition to the Help Tutor), students in the Treatment condition demonstrated better declarative help-seeking knowledge (compared with Control group students) but no better procedural knowledge.&lt;br /&gt;
&lt;br /&gt;
[[Image:Declarative_knowledge.jpg]]&lt;br /&gt;
[[Image:Procedural_knowledge.jpg]]&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 5: Improve future domain learning ====&lt;br /&gt;
&lt;br /&gt;
Due to technical difficulties, this goal was not evaluated in either study.&lt;br /&gt;
&lt;br /&gt;
==== Summary of results ====&lt;br /&gt;
&lt;br /&gt;
Overall, the following pattern of results emerges from the studies:&lt;br /&gt;
* The Help Seeking Support Environment identified help-seeking errors and intervened appropriately during the learning process.&lt;br /&gt;
* Students improved their help-seeking behavior while working with the system.&lt;br /&gt;
* Students demonstrated improved help-seeking behavior even once support was removed, after working with the Help Tutor over two months and two different topics. &lt;br /&gt;
* Furthermore, students acquired better help-seeking declarative knowledge.&lt;br /&gt;
* However, students&#039; domain learning did not improve.&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
==== 1 Assessing help-seeking behavior  ====&lt;br /&gt;
These studies put forward several direct measures of help seeking. Most notably, we found that the online help-seeking model is able to capture faulty help-seeking behavior in a manner that is consistent with domain learning. The operational nature of the model (i.e., the fact that it can be run on a computer) puts it in a unique position, compared to other models of help-seeking behavior. Furthermore, formative assessment using the model is done in a transparent manner that does not interrupt the learning process. These qualities of the model enable us to do a micro-genetic (moment-by-moment) analysis of help-seeking behavior over extended periods of time, which is quite unique in the literature on help seeking and on self-regulated learning. In addition, the detailed evaluation of students’ actions allows us to adapt the learning process to the learner in novel ways – that is, adapt not only to students’ domain knowledge and behavior, but also to their metacognitive knowledge and behavior. &lt;br /&gt;
Students’ online behavior as assessed by the help-seeking model also correlated with paper-and-pencil measures of help seeking knowledge and behavior. This result provides some support that the measures capture a metacognitive behavior that is domain and environment independent in nature. A combination of these instruments with domain learning measures can be used to investigate the different factors affecting learning behaviors and outcomes. &lt;br /&gt;
==== 2. Improving help-seeking behavior and knowledge ====&lt;br /&gt;
As detailed under the evaluation of goal 2 above, students&#039; hint usage improved significantly while working with the Help Tutor, and these improvements persisted beyond its scope; the instruction and self-assessment episodes added in Study 2 further improved students&#039; conceptual help-seeking knowledge, hint reading time, and hint-to-error ratio. However, no major improvements in the deliberateness of students&#039; solution attempts were found.&lt;br /&gt;
==== 3 Using metacognitive improvement to improve domain learning ====&lt;br /&gt;
The two studies did not find an effect of the improved help-seeking behavior on domain learning. One possible explanation might be that the help-seeking model focuses on the wrong actions, although the negative correlation with learning across studies suggests otherwise. Alternatively, perhaps the improvement in help-seeking behavior was not sufficient to measurably impact domain-level learning. However, we would expect to see at least a trend in the test scores, rather than the virtual tie we observed across the three units. &lt;br /&gt;
A third explanation (see paper #8 by authors) is that poor help-seeking behavior may be more a consequence of poor motivation toward domain learning than it is a cause.  Students who do not care to learn geometry may exhibit poor help seeking behaviors and learn less, thus yielding a correlation.  However, in this case, poor help seeking behavior may not reflect a lack of help-seeking skill per se, but a lack of motivation to deeply engage in help seeking and associated deep learning strategies (like attempting to encode instructional material in terms of deeper domain-relevant features rather than more superficial perceptual features).  Indeed Help condition students changed their help-seeking behaviors and appeared to do so in a lasting way, but perhaps associated changes in other deep learning strategies are needed before clear effects on domain learning can be observed.   &lt;br /&gt;
To elaborate, consider that the help-seeking model focuses on how deliberately students seek help, but does not evaluate the content of the help or how it is being used. It makes sure that students take enough time to read the hint, but does not assess how this information is being processed. One of the implicit assumptions of this line of research is that students can learn from given explanations in contextual hints. However, it may be that this process is more challenging. For example, Renkl showed that at times instructional explanations actually hinder students’ tendency to self-explain and thus learn (Renkl, 2002). In other words, perhaps students do not learn enough from help not only because of how they obtain it, but also because of how they process it. A reexamination of the existing help-seeking literature supports this hypothesis. Very few experiments have actively manipulated help seeking in ITSs to date. Those that manipulated the content of the help or students’ reaction to it often found that interventions that increase reflection yield better learning: Dutke and Reimer (2000) found that principle-based hints are better than operative ones; Ringenberg and VanLehn (2006) found that analogous solved examples may be better than conventional hints; Schworm and Renkl (2002) found that deep reflection questions caused more learning compared with conventional hints; and Baker et al. (2006) showed that auxiliary exercises for students who misuse hints help them learn better. At the same time, Baker (2006) found that reducing help abuse (and other gaming behaviors) may not contribute to learning gains by itself, a result similar to the ones presented in this paper.&lt;br /&gt;
&lt;br /&gt;
=== Connections===&lt;br /&gt;
&lt;br /&gt;
* The Help Tutor attempts to extend traditional tutoring beyond the common domains. In that, it is similar to the work of Amy Ogan on tutoring [[FrenchCulture | French Culture]]&lt;br /&gt;
&lt;br /&gt;
* The manipulation of interaction between the student and the tutor, which is &amp;quot;natural&amp;quot; in the control condition, is guided by the help tutor.  This is similar to the scripting manipulation of the [[Rummel Scripted Collaborative Problem Solving]] and the [[Walker A Peer Tutoring Addition]] projects.&lt;br /&gt;
&lt;br /&gt;
* Another example for studying the effects of hints is [[Ringenberg Examples-as-Help|Ringenberg&#039;s study]], in which hints are compared to examples. &lt;br /&gt;
&lt;br /&gt;
* Going to do an in-vivo study at a LearnLab site? Check out how to answer [[FAQ for teachers|teacher&#039;s FAQ]]&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
&lt;br /&gt;
=== Annotated bibliography ===&lt;br /&gt;
&lt;br /&gt;
# Aleven, V., &amp;amp; Koedinger, K.R. (2000) Limitations of student control: Do students know when they need help? in proceedings of 5th International Conference on Intelligent Tutoring Systems, 292-303. Berlin: Springer Verlag. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., McLaren, B.M., Roll, I., &amp;amp; Koedinger, K.R. (2004) Toward tutoring help seeking - Applying cognitive modeling to meta-cognitive skills . in proceedings of 7th International Conference on Intelligent Tutoring Systems, 227-39. Berlin: Springer-Verlag. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., Roll, I., McLaren, B.M., Ryu, E.J., &amp;amp; Koedinger, K.R. (2005) An architecture to combine meta-cognitive and cognitive tutoring: Pilot testing the Help Tutor. in proceedings of 12th International Conference on Artificial Intelligence in Education, Amsterdam, The Netherlands: IOS press. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., McLaren, B.M., Roll, I., &amp;amp; Koedinger, K.R. (2006). Toward meta-cognitive tutoring: A model of help seeking with a Cognitive Tutor. Int Journal of Artificial Intelligence in Education(16), 101-30 [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Baker, R.S., Corbett, A.T., &amp;amp; Koedinger, K.R. (2004) Detecting Student Misuse of Intelligent Tutoring Systems. in proceedings of 7Th International Conference on Intelligent Tutoring Systems, 531-40.&lt;br /&gt;
# Baker, R.S., Roll, I., Corbett, A.T., &amp;amp; Koedinger, K.R. (2005) Do Performance Goals Lead Students to Game the System? in proceedings of 12Th International Conference on Artificial Intelligence in Education, 57-64. Amsterdam, The Netherlands: IOS Press.&lt;br /&gt;
# Chang, K.K., Beck, J.E., Mostow, J., &amp;amp; Corbett, A. (2006) Does Help Help? A Bayes Net Approach to Modeling Tutor Interventions. in proceedings of Workshop on Educational Data Mining at AAAI 2006, 41-6. Menlo Park, California: AAAI.&lt;br /&gt;
# Feldstein, M.S. (1964). The Social Time Preference Discount Rate in Cost-Benefit Analysis. The Economic Journal 74(294), 360-79&lt;br /&gt;
# Roll, I., Baker, R.S., Aleven, V., McLaren, B.M., &amp;amp; Koedinger, K.R. (2005) Modeling Students’ Metacognitive Errors in Two Intelligent Tutoring Systems. in L. Ardissono,  (Eds.), in proceedings of User Modeling 2005, 379-88. Berlin: Springer-Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Ryu, E., Sewall, J., Leber, B., McLaren, B.M., Aleven, V., &amp;amp; Koedinger, K.R. (2006) Towards Teaching Metacognition: Supporting Spontaneous Self-Assessment. in proceedings of 8th International Conference on Intelligent Tutoring Systems, 738-40. Berlin: Springer Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B.M., Ryu, E., Baker, R.S., &amp;amp; Koedinger, K.R. (2006) The Help Tutor: Does Metacognitive Feedback Improves Students&#039; Help-Seeking Actions, Skills and Learning? in proceedings of 8th Int C on Intelligent Tutoring Systems, 360-9. Berlin: Springer Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B. M., &amp;amp; Koedinger, K. R. (2007). Can help seeking be tutored? Searching for the secret sauce of metacognitive tutoring. International Conference on Artificial Intelligence in Education, , 203-10. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B. M., &amp;amp; Koedinger, K. R. (2007). Designing for metacognition - applying cognitive tutor principles to the tutoring of help seeking. Metacognition and Learning, 2(2). [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Schworm, S., &amp;amp; Renkl, A. (2002) Learning by solved example problems: Instructional explanations reduce self-explanation activity. in proceedings of The 24Th Annual Conference of the Cognitive Science Society, 816-21. Mahwah, NJ: Erlbaum.&lt;br /&gt;
# Yudelson, M.V., Medvedeva, O., Legowski, E., Castine, M., Jukic, D., &amp;amp; Crowley, R.S. (2006) Mining Student Learning Data to Develop High Level Pedagogic Strategy in a Medical ITS. in proceedings of Workshop on Educational Data Mining at AAAI 2006, Menlo Park, CA: AAAI.&lt;br /&gt;
# Roll, I., Aleven, V., &amp;amp; Koedinger, K.R. (2004) Promoting Effective Help-Seeking Behavior through Declarative Instruction. in proceedings of 7th International Conference on Intelligent Tutoring Systems, 857-9. Berlin: Springer-Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Empirical Study]]&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=The_Help_Tutor_Roll_Aleven_McLaren&amp;diff=9503</id>
		<title>The Help Tutor Roll Aleven McLaren</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=The_Help_Tutor_Roll_Aleven_McLaren&amp;diff=9503"/>
		<updated>2009-05-21T04:22:09Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Further Information */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Towards Tutoring [[Metacognition]] - The Case of Help Seeking ==&lt;br /&gt;
Ido Roll, Vincent Aleven, Bruce M. McLaren, Kenneth Koedinger&lt;br /&gt;
&lt;br /&gt;
=== Meta-data ===&lt;br /&gt;
&lt;br /&gt;
PI&#039;s: Vincent Aleven, Ido Roll, Bruce M. McLaren, Ken Koedinger&lt;br /&gt;
&lt;br /&gt;
Other Contributers: EJ Ryu (programmer)&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Study # !! Start Date !! End Date !! LearnLab Site !!  # of Students !! Total Participant Hours !! DataShop? &lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;1&#039;&#039;&#039; || 2004       || 2004     || Analysis of existing data || 40 || 280 || No, old data&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;2&#039;&#039;&#039; || 2005       || 2005     || Analysis of existing data || 70 || 105 || No, old data&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;3&#039;&#039;&#039; || 5/2005     || 5/2005   || Hampton &amp;amp; Wilkinsburg (Geometry) || 60 || 270 || No, incompatible format&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;4&#039;&#039;&#039; || 2/2006     || 4/2006   || CWCTC (Geometry)          || 84 || 1,008 || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
&lt;br /&gt;
While working with a tutoring system, students are expected to regulate their own learning process. However, they often demonstrate inadequate [[metacognition|metacognitive processes]] in doing so. For example, students often ask for help too frequently or not frequently enough. &lt;br /&gt;
In this project we built an Intelligent Tutoring System to teach [[metacognition]], and in particular, to improve students&#039; [[help-seeking behavior]].  Our Help Seeking Support Environment includes three components:&lt;br /&gt;
# Direct [[help-seeking behavior|help seeking]] [[explicit instruction|instruction]], given by the teacher&lt;br /&gt;
# A [[Self-Assessment]] Tutor, to help students evaluate their own need for help&lt;br /&gt;
# The Help Tutor - a domain-independent agent that can be added as an adjunct to a [[cognitive tutor]]. Rather than making help-seeking decisions for the students, the Help Tutor teaches better help-seeking skills by tracing students&#039; actions on a (meta)cognitive [[help-seeking model]] and giving students appropriate feedback. &lt;br /&gt;
&lt;br /&gt;
In a series of [[in vivo experiment]]s, the Help Tutor accurately detected help-seeking errors that were associated with poorer learning and with poorer [[declarative]] and [[procedural]] [[knowledge component]]s of help seeking.  The main findings were that students made fewer help-seeking errors while working with the Help Tutor and acquired better help seeking [[declarative]] [[knowledge component]]s. &lt;br /&gt;
However, we did not find evidence that this led to an improvement in learning at the domain level or to better [[help-seeking behavior]] in a paper-and-pencil environment. &lt;br /&gt;
We pose a number of hypotheses in an attempt to explain these results. We question the current focus of metacognitive tutoring, and suggest ways to reexamine the role of [[help facilities]] and of metacognitive tutoring within Intelligent Tutoring Systems.&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
Teaching [[metacognition]] holds the promise not only of improving current learning in the domain of interest but also, perhaps mainly, of accelerating future learning and supporting successful regulation of independent learning. One example of metacognitive knowledge is help-seeking [[knowledge component]]s: the ability to identify the need for help, and to elicit appropriate assistance from the [[relevant resources|help facilities]].  &lt;br /&gt;
However, considerable evidence shows that metacognitive [[knowledge component]]s are in need of better support. For example, while working with Intelligent Tutoring Systems, students try to &amp;quot;[[game the system]]&amp;quot; or do not [[self-explanation|self-explain]] enough. Similarly, research shows that students&#039; [[help-seeking behavior]] leaves much room for improvement. &lt;br /&gt;
&lt;br /&gt;
==== Shallow help seeking [[knowledge component]]s ====&lt;br /&gt;
Research shows that students do not use their help-seeking knowledge components appropriately. For example, Aleven et al. (2006) show that 30% of students&#039; actions were consecutive fast help requests (a common form of [[help abuse]], termed &#039;[[clicking through hints]]&#039;), without taking enough time to read the requested hints.  &lt;br /&gt;
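The &#039;clicking through hints&#039; metric above can be sketched from log data roughly as follows. The log format, action labels, and the 3-second reading threshold are illustrative assumptions, not the Cognitive Tutor&#039;s actual instrumentation.&lt;br /&gt;

```python
# Sketch: flag "clicking through hints" -- consecutive hint requests issued
# too quickly for the hint text to have been read. The threshold below is
# an assumed value for illustration only.

MIN_READ_SECONDS = 3.0  # assumed minimum plausible hint-reading time

def clicking_through_rate(actions):
    """actions: list of (timestamp_seconds, action_type) tuples, ordered by
    time; action_type is e.g. 'HINT' or 'ATTEMPT'. Returns the fraction of
    actions that are fast repeat hint requests."""
    fast_hint_pairs = 0
    total = len(actions)
    for (t_prev, a_prev), (t_cur, a_cur) in zip(actions, actions[1:]):
        if a_prev == "HINT" and a_cur == "HINT" and t_cur - t_prev < MIN_READ_SECONDS:
            fast_hint_pairs += 1
    return fast_hint_pairs / total if total else 0.0

log = [(0.0, "ATTEMPT"), (5.0, "HINT"), (6.0, "HINT"), (7.0, "HINT"), (20.0, "ATTEMPT")]
print(clicking_through_rate(log))  # 2 of 5 actions are fast repeat hint requests
```

A rate like this, computed per student, is the kind of quantity that the log-file analyses described here aggregate and correlate with learning.&lt;br /&gt;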
Extensive log-file analysis suggests that students apply faulty [[knowledge component]]s such as the following:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faulty [[procedural]] [[knowledge component]]s:&#039;&#039;&#039;&lt;br /&gt;
Cognitive aspects:&lt;br /&gt;
  If I don’t know the answer =&amp;gt; &lt;br /&gt;
  I should guess&lt;br /&gt;
&lt;br /&gt;
Motivational aspects:&lt;br /&gt;
  If I get the answer correct =&amp;gt;&lt;br /&gt;
  I achieved the goal&lt;br /&gt;
&lt;br /&gt;
Social aspects:&lt;br /&gt;
  If I ask for help =&amp;gt;&lt;br /&gt;
  I am weak&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faulty [[declarative]] [[knowledge component]]s:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
  Asking for hints will always reduce my skill level&lt;br /&gt;
&lt;br /&gt;
  Making an error is better than asking for a hint&lt;br /&gt;
&lt;br /&gt;
  Only weak people ask for help&lt;br /&gt;
&lt;br /&gt;
==== Teaching vs. supporting [[metacognition]] ====&lt;br /&gt;
&lt;br /&gt;
Several systems support students&#039; metacognitive actions in a way that encourages, or even forces, students to learn productively and efficiently. For example, a tutoring system can require the student to self-explain. While this approach is likely to improve domain learning in the supported environment, the effect is not likely to persist beyond the scope of the tutoring system, and therefore is not likely to help students become better future learners. &lt;br /&gt;
&lt;br /&gt;
Towards that end, we chose not to &#039;&#039;&#039;support&#039;&#039;&#039; students&#039; help seeking actions, but to &#039;&#039;&#039;teach&#039;&#039;&#039; them better help-seeking skills. Rather than making the metacognitive decisions for the students (for example, by preventing help-seeking errors or gaming opportunities), this study focuses on helping students refine their Help Seeking [[knowledge component]]s and acquire better [[feature validity]] of their [[help-seeking behavior|help-seeking]] [[metacognition|metacognitive skills]].&lt;br /&gt;
&lt;br /&gt;
By doing so, we examine whether metacognitive knowledge can be taught using familiar conventional domain-level pedagogies.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
See [[:Category:Help Tutor|Help Tutor Glossary]]&lt;br /&gt;
&lt;br /&gt;
=== Research questions ===&lt;br /&gt;
&lt;br /&gt;
# Can conventional and well-established instructional principles in the domain level be used to tutor [[metacognition|metacognitive]] [[knowledge component]]s such as [[help-seeking behavior|Help Seeking]] [[knowledge component]]s?&lt;br /&gt;
# Does the practice of better metacognitive behavior translate, in turn, into better domain learning?&lt;br /&gt;
&lt;br /&gt;
In addition, the project makes the following contributions:&lt;br /&gt;
# An improved understanding of the nature of help-seeking knowledge and its acquisition.&lt;br /&gt;
# A novel framework for the design of  goals, interaction and assessment for metacognitive tutoring.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Two studies were performed with the Help Tutor. In both studies the independent variable was the presence of help-seeking support.&lt;br /&gt;
The control condition used the conventional Geometry Cognitive Tutor.&lt;br /&gt;
&lt;br /&gt;
[[Image:Geometry Cognitive Tutor.jpg]]&lt;br /&gt;
&lt;br /&gt;
The treatment condition varied between studies:&lt;br /&gt;
* Study one: The Geometry Cognitive Tutor + the Help Tutor&lt;br /&gt;
* Study two: The Geometry Cognitive Tutor + the Help Seeking Support Environment (help seeking explicit instruction, self-assessment tutor, and Help Tutor)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Help Tutor:&#039;&#039;&#039;&lt;br /&gt;
The Help Tutor is a Cognitive Tutor in its own right that identifies recommended types of actions by tracing students’ interaction with the Geometry Cognitive Tutor relative to a metacognitive help-seeking model. When students perform actions that deviate from the recommended ones, the Help Tutor presents a message that stresses the recommended action to be taken. Messages from the metacognitive Help Tutor and the domain-level Cognitive Tutor are coordinated, so that the student receives only the most helpful message at each point [2].   &lt;br /&gt;
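The model-tracing and message-coordination ideas above can be sketched roughly as follows. The rules, the skill-estimate threshold, and the message texts are invented for illustration; they are not the Help Tutor&#039;s actual help-seeking model.&lt;br /&gt;

```python
# Minimal rule-based sketch of metacognitive model tracing. All rules and
# messages below are hypothetical stand-ins for the real help-seeking model.

def recommended_action(skill_estimate):
    """Return the action the (hypothetical) model recommends next."""
    if skill_estimate < 0.4:   # weak skill estimate: read a hint before guessing
        return "ASK_HINT"
    return "TRY_STEP"          # strong skill estimate: attempt the step

def trace(student_action, skill_estimate):
    """Compare the student's action against the model's recommendation;
    return a metacognitive feedback message, or None if the action conforms."""
    rec = recommended_action(skill_estimate)
    if student_action == rec:
        return None
    if student_action == "TRY_STEP":
        return "This step looks new to you - try reading a hint first."
    return "You know this skill - try the step on your own first."

def coordinate(metacog_msg, domain_msg):
    """Show only the most helpful message at each point: here, domain-level
    feedback (if any) takes priority over the metacognitive message."""
    return domain_msg if domain_msg is not None else metacog_msg

print(coordinate(trace("ASK_HINT", skill_estimate=0.9), None))
```

The key design point mirrored here is that the metacognitive tutor does not block the student&#039;s action; it only comments on deviations, and its messages are merged with domain feedback rather than competing with it.&lt;br /&gt;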
&lt;br /&gt;
[[Image:The Help-tutor.jpg]]&lt;br /&gt;
[[image:The Help Seeking Model.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Self-Assessment Tutor:&#039;&#039;&#039;&lt;br /&gt;
The ability to correctly self-assess one’s own knowledge level is correlated with strategic use of help (Tobias and Everson, 2002). The Self-Assessment Tutor is designed to tutor students on &lt;br /&gt;
their self-assessment skills; to help students make appropriate learning decisions based on their self-assessment; and mainly, to give students a tutoring environment, low in cognitive load, in which they can practice using their help-seeking skills.  &lt;br /&gt;
The curriculum used by the Treatment group in study two consists of interleaving Self Assessment and Cognitive Tutor + Help Tutor sessions, with the Self Assessment sessions taking about 10% of the students’ time. During each self-assessment session the student assesses the skills to be practiced in the subsequent Cognitive Tutor section.&lt;br /&gt;
  &lt;br /&gt;
[[Image:The Self-Assessment Tutor.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Explicit help-seeking instruction:&#039;&#039;&#039;&lt;br /&gt;
As White and Frederiksen demonstrated (1998), reflecting in the classroom environment on the desired metacognitive process helps students internalize it. With that goal in mind, we created a short classroom lesson about help seeking with the following objectives: to give students a better declarative understanding of desired and effective help-seeking behavior; to improve their dispositions and attitudes towards seeking help; and to frame the help-seeking knowledge as an important learning goal, alongside Geometry knowledge, for the coming few weeks. The instruction includes a video presentation with examples of productive and faulty help-seeking behavior and the appropriate help-seeking principles. &lt;br /&gt;
&lt;br /&gt;
[[Image:Explicit help-seeking instruction.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Dependent variables ===&lt;br /&gt;
&lt;br /&gt;
The study uses two levels of dependent measures:&lt;br /&gt;
# Directly assessing Help Seeking skills&lt;br /&gt;
# Assessing domain-level learning, and by that evaluating the contribution of the help-seeking skills.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Assessments of help-seeking knowledge:&lt;br /&gt;
* [[Normal post-test]]: &lt;br /&gt;
** Declarative: hypothetical help-seeking dilemmas&lt;br /&gt;
** Procedural: Help seeking error rate while working with the tutor&lt;br /&gt;
* [[Transfer]]: Ability to use optional hints embedded within certain test items in the paper test.&lt;br /&gt;
&lt;br /&gt;
[[Image:embedded hints.jpg]]&lt;br /&gt;
&lt;br /&gt;
2. Assessments of domain knowledge:&lt;br /&gt;
* [[Normal post-test]]: Problem solving and explanation items like those in the tutor&#039;s instruction.&lt;br /&gt;
* [[Transfer]]: &lt;br /&gt;
** Data insufficiency (or &amp;quot;not enough information&amp;quot;) items.&lt;br /&gt;
** Conceptual understanding items (study two only)&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
&lt;br /&gt;
The combination of explicit help-seeking instruction, on-time feedback on help-seeking errors, and raising awareness of knowledge deficits will&lt;br /&gt;
* Improve [[feature validity]] of students&#039; help seeking skills&lt;br /&gt;
and thus, in turn, will&lt;br /&gt;
* Improve learning of domain knowledge by using those skills effectively.&lt;br /&gt;
&lt;br /&gt;
==== Instructional Principles ====&lt;br /&gt;
&lt;br /&gt;
The main principle being evaluated here is whether [[Roll_help seeking principle | instruction should support meta-cognition in the context of problem solving]] by using principles of cognitive tutoring such as:&lt;br /&gt;
* Giving direct instruction&lt;br /&gt;
* Giving immediate feedback on errors&lt;br /&gt;
* Prompting for self-assessment&lt;br /&gt;
&lt;br /&gt;
This utilizes the following instructional principles:&lt;br /&gt;
&lt;br /&gt;
* The Self-Assessment Tutor utilizes the [[Reflection questions]] principle&lt;br /&gt;
* The Help Tutor itself utilizes the [[Tutoring feedback]] principle&lt;br /&gt;
* The Help Seeking Instruction utilizes the [[Explicit instruction]] principle.&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
As seen below (adapted from Roll et al. 2006), metacognitive tutoring has the following goals:&lt;br /&gt;
# First, the tutoring system should capture metacognitive errors (in our case, help-seeking errors).&lt;br /&gt;
# Then, it should lead to an improved metacognitive behavior within the tutoring system.&lt;br /&gt;
# This, in turn, should lead to an improvement in the domain learning.&lt;br /&gt;
# The effect should persist beyond the scope of the tutoring system.&lt;br /&gt;
# As a result, students are expected to become better future learners.&lt;br /&gt;
&lt;br /&gt;
[[Image:Roll_Pyramid.jpg]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 1: Capture metacognitive errors ====&lt;br /&gt;
&lt;br /&gt;
In study 1, 17% of students&#039; actions were classified as errors. These errors were significantly negatively correlated with learning (r=-0.42) - the more help-seeking errors captured by the system, the smaller the improvement from pre- to post-test.&lt;br /&gt;
This data suggests that the help-seeking model captures appropriate actions, and that the goal was achieved - the Help Tutor captures help-seeking errors.&lt;br /&gt;
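The reported r=-0.42 was computed over the actual study data; a minimal sketch of the same analysis, using made-up illustrative numbers rather than the study&#039;s data, might look like this.&lt;br /&gt;

```python
# Sketch of the correlation analysis above: Pearson r between each student's
# help-seeking error rate and their pre-to-post learning gain. The data
# below are invented for illustration; the study reported r = -0.42.

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

error_rate = [0.05, 0.10, 0.20, 0.30]  # fraction of actions flagged as errors
gain = [0.30, 0.35, 0.15, 0.10]        # post-test score minus pre-test score

print(round(pearson_r(error_rate, gain), 2))  # negative: more errors, less gain
```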
&lt;br /&gt;
[[Image:help-seeking_and_learning.jpg]]&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 2: Improve metacognitive behavior ====&lt;br /&gt;
&lt;br /&gt;
Students’ hint usage improved significantly while working with the Help Tutor across several measures, most notably, help-seeking error rate, frequency of asking for bottom-out hints, and hint reading time. Study 2 also shows that these aspects of improvement persist even beyond the scope of the Help Tutor. These improvements consistently reached significance only after an extended period of time of working with the Help Tutor (i.e., after the second part of the study). We hypothesize that since the Help Tutor feedback appeared in two different areas of geometry learning, students could more easily acquire the domain-independent help-seeking skills and thus transfer them better to the other subject matter areas addressed between and after the two in which the Help Tutor was used. &lt;br /&gt;
The addition of the help-seeking instruction and self-assessment episodes in Study 2 led to improvements in students’ conceptual help-seeking knowledge, time to read hints, and an apparent improvement in hint-to-error ratio. The fact that students were more likely to ask for hints than to commit more errors provides behavioral evidence that the self-assessment instruction provided by both the Self-Assessment Tutor and by certain messages in the Help Tutor led students to be more aware of their need for help, reinforcing the causal link between self-assessment and strategic help seeking (Tobias &amp;amp; Everson, 2002). &lt;br /&gt;
Although students’ hint usage improved, no major improvements in the deliberateness of students’ solution attempts were found (besides that indicated by the differences in hint-to-error ratio). It may be that the Help Tutor did not support this aspect of learning well enough. An alternative explanation is that as long as students attempted to solve problems (whether successfully or not) they were too occupied by the problem-solving attempts and thus did not pay attention (and cognitive resources) to the Help Tutor until they reached an impasse that required them to ask for help. This may be an outcome of our design decision to give feedback in the context of domain learning. &lt;br /&gt;
&lt;br /&gt;
[[Image:help-seeking_behavior.jpg]]&lt;br /&gt;
&lt;br /&gt;
This data suggests that while the system was not able to improve all aspects of the desired metacognitive behavior, it did improve students&#039; behavior on the common types of errors.&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 3: Improve domain learning ====&lt;br /&gt;
&lt;br /&gt;
While students&#039; help-seeking behavior improved while working with the Help Tutor (in study 1) or the full Help Seeking Support Environment (in study 2), we did not observe differences in learning between the two conditions in either study.&lt;br /&gt;
&lt;br /&gt;
Study 1 results:&lt;br /&gt;
&lt;br /&gt;
[[Image:study_1_results.jpg]]&lt;br /&gt;
&lt;br /&gt;
Study 2 results:&lt;br /&gt;
&lt;br /&gt;
[[Image:study_2_results.jpg]]&lt;br /&gt;
&lt;br /&gt;
* Note: since the tests at times 1 and 2 evaluated different instructional units (Angles vs. Quadrilaterals), lower scores at time 2 do not indicate a decrease in knowledge.&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 4: Improve future metacognitive behavior ====&lt;br /&gt;
&lt;br /&gt;
As noted under goal 2, students’ hint usage improved significantly while working with the Help Tutor, and study 2 showed that these improvements persisted even beyond the scope of the Help Tutor, reaching significance after an extended period of work across two different areas of geometry learning. &lt;br /&gt;
&lt;br /&gt;
[[Image:four months of HT.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, students&#039; help seeking behavior was evaluated in a transfer environment - the paper and pencil tests.&lt;br /&gt;
&lt;br /&gt;
Hypothetical help seeking dilemmas, such as the one described below, were used to evaluate declarative help-seeking knowledge.&lt;br /&gt;
&lt;br /&gt;
  1. You tried to answer a question that you know, but for some reason the tutor says that your answer is wrong. What should you do? &lt;br /&gt;
  [ ] First I would review my calculations. Perhaps I can find the mistake myself? &lt;br /&gt;
  [ ] The Tutor must have made a mistake. I will retype the same answer again. &lt;br /&gt;
  [ ] I would ask for a hint, to understand my mistake.&lt;br /&gt;
&lt;br /&gt;
Procedural help-seeking skills were evaluated using embedded hints in the tests (see the figure in the Dependent Variables section above).&lt;br /&gt;
&lt;br /&gt;
In study 1 (which included only the Help Tutor component), students in the Treatment condition demonstrated neither better declarative nor better procedural help-seeking knowledge, compared with the Control condition.&lt;br /&gt;
&lt;br /&gt;
In study 2 (which included the explicit help-seeking instruction and the Self-Assessment Tutor in addition to the Help Tutor), students in the Treatment condition demonstrated better declarative help-seeking knowledge (compared with Control group students) but no better procedural knowledge.&lt;br /&gt;
&lt;br /&gt;
[[Image:Declarative_knowledge.jpg]]&lt;br /&gt;
[[Image:Procedural_knowledge.jpg]]&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 5: Improve future domain learning ====&lt;br /&gt;
&lt;br /&gt;
Due to technical difficulties, this goal was not evaluated in either study.&lt;br /&gt;
&lt;br /&gt;
==== Summary of results ====&lt;br /&gt;
&lt;br /&gt;
Overall, the following pattern of results emerges from the studies:&lt;br /&gt;
* The Help Seeking Support Environment successfully identified and intervened on inappropriate help-seeking actions during the learning process.&lt;br /&gt;
* Students improved their help-seeking behavior while working with the system.&lt;br /&gt;
* Students demonstrated improved help-seeking behavior even once support was removed, after working with the Help Tutor over two months and two different topics. &lt;br /&gt;
* Furthermore, students acquired better help-seeking declarative knowledge.&lt;br /&gt;
* However, students&#039; domain learning did not improve.&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
==== 1. Assessing help-seeking behavior ====&lt;br /&gt;
These studies put forward several direct measures of help seeking. Most notably, we found that the online help-seeking model is able to capture faulty help-seeking behavior in a manner that is consistent with domain learning. The operational nature of the model (i.e., the fact that it can be run on a computer) puts it in a unique position, compared to other models of help-seeking behavior. Furthermore, formative assessment using the model is done in a transparent manner that does not interrupt the learning process. These qualities of the model enable us to do a micro-genetic (moment-by-moment) analysis of help-seeking behavior over extended periods of time, quite unique in the literature on help seeking and on self-regulated learning. In addition, the detailed evaluation of students’ actions allows us to adapt the learning process to the learner in novel ways – that is, adapt not only to students’ domain knowledge and behavior, but also to their metacognitive knowledge and behavior. &lt;br /&gt;
Students’ online behavior as assessed by the help-seeking model also correlated with paper-and-pencil measures of help-seeking knowledge and behavior. This result provides some support for the claim that the measures capture a metacognitive behavior that is domain- and environment-independent in nature. A combination of these instruments with domain learning measures can be used to investigate the different factors affecting learning behaviors and outcomes. &lt;br /&gt;
==== 2. Improving help-seeking behavior and knowledge ====&lt;br /&gt;
Students’ hint usage improved significantly while working with the Help Tutor across several measures - most notably help-seeking error rate, frequency of bottom-out hint requests, and hint reading time - and study 2 showed that these improvements persisted beyond the scope of the Help Tutor (see the evaluations of goals 2 and 4 above for details). &lt;br /&gt;
==== 3. Using metacognitive improvement to improve domain learning ====&lt;br /&gt;
The two studies did not find an effect of the improved help-seeking behavior on domain learning. One possible explanation might be that the help-seeking model focuses on the wrong actions, although the negative correlation with learning across studies suggests otherwise. Alternatively, perhaps the improvement in help-seeking behavior was not sufficient to measurably impact domain-level learning. However, we would expect to see at least a trend in the test scores, rather than the virtual tie we observed across the three units. &lt;br /&gt;
A third explanation (see paper #8 by authors) is that poor help-seeking behavior may be more a consequence of poor motivation toward domain learning than it is a cause.  Students who do not care to learn geometry may exhibit poor help seeking behaviors and learn less, thus yielding a correlation.  However, in this case, poor help seeking behavior may not reflect a lack of help-seeking skill per se, but a lack of motivation to deeply engage in help seeking and associated deep learning strategies (like attempting to encode instructional material in terms of deeper domain-relevant features rather than more superficial perceptual features).  Indeed Help condition students changed their help-seeking behaviors and appeared to do so in a lasting way, but perhaps associated changes in other deep learning strategies are needed before clear effects on domain learning can be observed.   &lt;br /&gt;
To elaborate, consider that the help-seeking model focuses on how deliberately students seek help, but does not evaluate the content of the help or how it is being used. It makes sure that students take enough time to read the hint, but does not assess how this information is being processed. One of the implicit assumptions of this line of research is that students can learn from given explanations in contextual hints.  However, it may be that this process is more challenging. For example, Renkl showed that at times instructional explanations actually hinder students’ tendency to self-explain and thus learn (Renkl, 2002). In other words, perhaps students do not learn enough from help not only because of how they obtain it, but also because of how they process it. Reevaluating the existing help-seeking literature supports this hypothesis. Very few experiments have actively manipulated help seeking in ITSs to date. Those who manipulated the content of the help or students’ reaction to it often found that interventions that increase reflection yield better learning: Dutke and Reimer (2000) found that principle-based hints are better than operative ones; Ringenberg and VanLehn (2006) found that analogous solved examples may be better than conventional hints; Schworm and Renkl (2002) found that deep reflection questions caused more learning compared with conventional hints; and Baker et al. (2006) showed that auxiliary exercises for students who misuse hints help them learn better. At the same time, Baker (2006) found that reducing help abuse (and other gaming behaviors) may not contribute to learning gains by itself, a similar result to the ones presented in this paper.&lt;br /&gt;
&lt;br /&gt;
=== Connections===&lt;br /&gt;
&lt;br /&gt;
* The Help Tutor attempts to extend traditional tutoring beyond the common domains. In that, it is similar to the work of Amy Ogan on tutoring [[FrenchCulture | French Culture]]&lt;br /&gt;
&lt;br /&gt;
* The manipulation of interaction between the student and the tutor, which is &amp;quot;natural&amp;quot; in the control condition, is guided by the help tutor.  This is similar to the scripting manipulation of the [[Rummel Scripted Collaborative Problem Solving]] and the [[Walker A Peer Tutoring Addition]] projects.&lt;br /&gt;
&lt;br /&gt;
* Another example for studying the effects of hints is [[Ringenberg Examples-as-Help|Ringenberg&#039;s study]], in which hints are compared to examples. &lt;br /&gt;
&lt;br /&gt;
* Going to do an in-vivo study at a LearnLab site? Check out how to answer [[FAQ for teachers|teacher&#039;s FAQ]]&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
&lt;br /&gt;
=== Annotated bibliography ===&lt;br /&gt;
&lt;br /&gt;
# Aleven, V., &amp;amp; Koedinger, K.R. (2000) Limitations of student control: Do students know when they need help? in proceedings of 5th International Conference on Intelligent Tutoring Systems, 292-303. Berlin: Springer Verlag. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., McLaren, B.M., Roll, I., &amp;amp; Koedinger, K.R. (2004) Toward tutoring help seeking - Applying cognitive modeling to meta-cognitive skills . in proceedings of 7th International Conference on Intelligent Tutoring Systems, 227-39. Berlin: Springer-Verlag. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., Roll, I., McLaren, B.M., Ryu, E.J., &amp;amp; Koedinger, K.R. (2005) An architecture to combine meta-cognitive and cognitive tutoring: Pilot testing the Help Tutor. in proceedings of 12th International Conference on Artificial Intelligence in Education, Amsterdam, The Netherlands: IOS press. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., McLaren, B.M., Roll, I., &amp;amp; Koedinger, K.R. (2006). Toward meta-cognitive tutoring: A model of help seeking with a Cognitive Tutor. Int Journal of Artificial Intelligence in Education(16), 101-30 [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Baker, R.S., Corbett, A.T., &amp;amp; Koedinger, K.R. (2004) Detecting Student Misuse of Intelligent Tutoring Systems. in proceedings of 7th International Conference on Intelligent Tutoring Systems, 531-40.&lt;br /&gt;
# Baker, R.S., Roll, I., Corbett, A.T., &amp;amp; Koedinger, K.R. (2005) Do Performance Goals Lead Students to Game the System? in proceedings of 12th International Conference on Artificial Intelligence in Education, 57-64. Amsterdam, The Netherlands: IOS Press.&lt;br /&gt;
# Chang, K.K., Beck, J.E., Mostow, J., &amp;amp; Corbett, A. (2006) Does Help Help? A Bayes Net Approach to Modeling Tutor Interventions. in proceedings of Workshop on Educational Data Mining at AAAI 2006, 41-6. Menlo Park, California: AAAI.&lt;br /&gt;
# Feldstein, M.S. (1964). The Social Time Preference Discount Rate in Cost-Benefit Analysis. The Economic Journal 74(294), 360-79&lt;br /&gt;
# Roll, I., Baker, R.S., Aleven, V., McLaren, B.M., &amp;amp; Koedinger, K.R. (2005) Modeling Students’ Metacognitive Errors in Two Intelligent Tutoring Systems. in L. Ardissono,  (Eds.), in proceedings of User Modeling 2005, 379-88. Berlin: Springer-Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Ryu, E., Sewall, J., Leber, B., McLaren, B.M., Aleven, V., &amp;amp; Koedinger, K.R. (2006) Towards Teaching Metacognition: Supporting Spontaneous Self-Assessment. in proceedings of 8th International Conference on Intelligent Tutoring Systems, 738-40. Berlin: Springer Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B.M., Ryu, E., Baker, R.S., &amp;amp; Koedinger, K.R. (2006) The Help Tutor: Does Metacognitive Feedback Improve Students&#039; Help-Seeking Actions, Skills and Learning? in proceedings of 8th International Conference on Intelligent Tutoring Systems, 360-9. Berlin: Springer Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B. M., &amp;amp; Koedinger, K. R. (2007). Can help seeking be tutored? Searching for the secret sauce of metacognitive tutoring. International Conference on Artificial Intelligence in Education, , 203-10. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B. M., &amp;amp; Koedinger, K. R. (2007). Designing for metacognition - applying cognitive tutor principles to the tutoring of help seeking. Metacognition and Learning, 2(2). [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Schworm, S., &amp;amp; Renkl, A. (2002) Learning by solved example problems: Instructional explanations reduce self-explanation activity. in proceedings of The 24Th Annual Conference of the Cognitive Science Society, 816-21. Mahwah, NJ: Erlbaum.&lt;br /&gt;
# Yudelson, M.V., Medvedeva, O., Legowski, E., Castine, M., Jukic, D., &amp;amp; Crowley, R.S. (2006) Mining Student Learning Data to Develop High Level Pedagogic Strategy in a Medical ITS. in proceedings of Workshop on Educational Data Mining at AAAI 2006, Menlo Park, CA: AAAI.&lt;br /&gt;
# Roll, I., Aleven, V., &amp;amp; Koedinger, K.R. (2004) Promoting Effective Help-Seeking Behavior through Declarative Instruction. in proceedings of 7th International Conference on Intelligent Tutoring Systems, 857-9. Berlin: Springer-Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Empirical Study]]&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=The_Help_Tutor_Roll_Aleven_McLaren&amp;diff=9502</id>
		<title>The Help Tutor Roll Aleven McLaren</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=The_Help_Tutor_Roll_Aleven_McLaren&amp;diff=9502"/>
		<updated>2009-05-21T04:21:49Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Explanation */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Towards Tutoring [[Metacognition]] - The Case of Help Seeking ==&lt;br /&gt;
Ido Roll, Vincent Aleven, Bruce M. McLaren, Kenneth Koedinger&lt;br /&gt;
&lt;br /&gt;
=== Meta-data ===&lt;br /&gt;
&lt;br /&gt;
PIs: Vincent Aleven, Ido Roll, Bruce M. McLaren, Ken Koedinger&lt;br /&gt;
&lt;br /&gt;
Other Contributors: EJ Ryu (programmer)&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Study # !! Start Date !! End Date !! LearnLab Site !!  # of Students !! Total Participant Hours !! DataShop? &lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;1&#039;&#039;&#039; || 2004       || 2004     || Analysis of existing data || 40 || 280 || No, old data&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;2&#039;&#039;&#039; || 2005       || 2005     || Analysis of existing data || 70 || 105 || No, old data&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;3&#039;&#039;&#039; || 5/2005     || 5/2005   || Hampton &amp;amp; Wilkinsburg (Geometry) || 60 || 270 || No, incompatible format&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;4&#039;&#039;&#039; || 2/2006     || 4/2006   || CWCTC (Geometry)          || 84 || 1,008 || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
&lt;br /&gt;
While working with a tutoring system, students are expected to regulate their own learning process. However, they often demonstrate inadequate [[metacognition|metacognitive processes]] in doing so. For example, students often ask for help too frequently or not frequently enough. &lt;br /&gt;
In this project we built an Intelligent Tutoring System to teach [[metacognition]], and in particular, to improve students&#039; [[help-seeking behavior]].  Our Help Seeking Support Environment includes three components:&lt;br /&gt;
# Direct [[help-seeking behavior|help seeking]] [[explicit instruction|instruction]], given by the teacher&lt;br /&gt;
# A [[Self-Assessment]] Tutor, to help students evaluate their own need for help&lt;br /&gt;
# The Help Tutor - a domain-independent agent that can be added as an adjunct to a [[cognitive tutor]]. Rather than making help-seeking decisions for the students, the Help Tutor teaches better help-seeking skills by tracing students&#039; actions against a (meta)cognitive [[help-seeking model]] and giving students appropriate feedback. &lt;br /&gt;
&lt;br /&gt;
In a series of [[in vivo experiment]]s, the Help Tutor accurately detected help-seeking errors that were associated with poorer learning and with poorer [[declarative]] and [[procedural]] [[knowledge component]]s of help seeking.  The main findings were that students made fewer help-seeking errors while working with the Help Tutor and acquired better help seeking [[declarative]] [[knowledge component]]s. &lt;br /&gt;
However, we did not find evidence that this led to an improvement in learning at the domain level or to better [[help-seeking behavior]] in a paper-and-pencil environment. &lt;br /&gt;
We pose a number of hypotheses in an attempt to explain these results. We question the current focus of metacognitive tutoring, and suggest ways to reexamine the role of [[help facilities]] and of metacognitive tutoring within Intelligent Tutoring Systems.&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
Teaching [[metacognition]] holds the promise not only of improving current learning in the domain of interest, but also, and perhaps mainly, of accelerating future learning and successful regulation of independent learning. One example of metacognitive knowledge is help-seeking [[knowledge component]]s: the ability to identify the need for help, and to elicit appropriate assistance from the [[relevant resources|help facilities]].  &lt;br /&gt;
However, considerable evidence shows that metacognitive [[knowledge component]]s are in need of better support. For example, while working with Intelligent Tutoring Systems, students try to &amp;quot;[[game the system]]&amp;quot; or do not [[self-explanation|self-explain]] enough. Similarly, research shows that students&#039; [[help-seeking behavior]] leaves much room for improvement. &lt;br /&gt;
&lt;br /&gt;
==== Shallow help seeking [[knowledge component]]s ====&lt;br /&gt;
Research shows that students do not use their help-seeking knowledge components appropriately. For example, Aleven et al. (2006) show that 30% of students&#039; actions were consecutive fast help requests (a common form of [[help abuse]], termed &#039;[[clicking through hints]]&#039;), without taking enough time to read the requested hints.  &lt;br /&gt;
Extensive log-file analysis suggests that students apply faulty [[knowledge component]]s such as the following:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faulty [[procedural]] [[knowledge component]]s:&#039;&#039;&#039;&lt;br /&gt;
Cognitive aspects:&lt;br /&gt;
  If I don’t know the answer =&amp;gt; &lt;br /&gt;
  I should guess&lt;br /&gt;
&lt;br /&gt;
Motivational aspects:&lt;br /&gt;
  If I get the answer correct =&amp;gt;&lt;br /&gt;
  I achieved the goal&lt;br /&gt;
&lt;br /&gt;
Social aspects:&lt;br /&gt;
  If I ask for help =&amp;gt;&lt;br /&gt;
  I am weak&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faulty [[declarative]] [[knowledge component]]s:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
  Asking for hints will always reduce my skill level&lt;br /&gt;
&lt;br /&gt;
  Making an error is better than asking for a hint&lt;br /&gt;
&lt;br /&gt;
  Only weak people ask for help&lt;br /&gt;
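The faulty rules above can be read as condition-action pairs. As a purely illustrative sketch (not the Help Tutor&#039;s actual rule base; all names and rules here are invented), such production-style [[knowledge component]]s might be encoded and matched against a student state like this:&lt;br /&gt;

```python
# Purely illustrative: faulty vs. desired help-seeking knowledge components
# encoded as condition-action production rules. All names and rules are
# invented for this sketch; this is not the Help Tutor's rule base.
from dataclasses import dataclass

@dataclass
class StudentState:
    knows_answer: bool

# Each rule is (name, condition on the state, action it recommends).
FAULTY_RULES = [
    ("guess-when-stuck", lambda s: not s.knows_answer, "guess"),
]
DESIRED_RULES = [
    ("seek-help-when-stuck", lambda s: not s.knows_answer,
     "ask for a hint and read it carefully"),
]

def recommend(state, rules):
    """Return the actions of every rule whose condition matches the state."""
    return [action for _name, cond, action in rules if cond(state)]

stuck = StudentState(knows_answer=False)
print(recommend(stuck, FAULTY_RULES))   # -> ['guess']
print(recommend(stuck, DESIRED_RULES))  # -> ['ask for a hint and read it carefully']
```

Tutoring help seeking then amounts to replacing the faulty rules with the desired ones.&lt;br /&gt;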
&lt;br /&gt;
==== Teaching vs. supporting [[metacognition]] ====&lt;br /&gt;
&lt;br /&gt;
Several systems support students&#039; metacognitive actions in a way that encourages, or even forces, students to learn productively and efficiently. For example, a tutoring system can require the student to self-explain. While this approach is likely to improve domain learning in the supported environment, the effect is not likely to persist beyond the scope of the tutoring system, and therefore is not likely to help students become better future learners. &lt;br /&gt;
&lt;br /&gt;
Towards that end, we chose not to &#039;&#039;&#039;support&#039;&#039;&#039; students&#039; help seeking actions, but to &#039;&#039;&#039;teach&#039;&#039;&#039; them better help-seeking skills. Rather than making the metacognitive decisions for the students (for example, by preventing help-seeking errors or gaming opportunities), this study focuses on helping students refine their Help Seeking [[knowledge component]]s and acquire better [[feature validity]] of their [[help-seeking behavior|help-seeking]] [[metacognition|metacognitive skills]].&lt;br /&gt;
&lt;br /&gt;
By doing so, we examine whether metacognitive knowledge can be taught using familiar conventional domain-level pedagogies.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
See [[:Category:Help Tutor|Help Tutor Glossary]]&lt;br /&gt;
&lt;br /&gt;
=== Research questions ===&lt;br /&gt;
&lt;br /&gt;
# Can conventional and well-established instructional principles in the domain level be used to tutor [[metacognition|metacognitive]] [[knowledge component]]s such as [[help-seeking behavior|Help Seeking]] [[knowledge component]]s?&lt;br /&gt;
# Does the practice of better metacognitive behavior translate, in turn, to better domain learning?&lt;br /&gt;
&lt;br /&gt;
In addition, the project makes the following contributions:&lt;br /&gt;
# An improved understanding of the nature of help-seeking knowledge and its acquisition.&lt;br /&gt;
# A novel framework for the design of  goals, interaction and assessment for metacognitive tutoring.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Two studies were performed with the Help Tutor. In both studies the independent variable was the presence of help-seeking support.&lt;br /&gt;
The control condition used the conventional Geometry Cognitive Tutor:&lt;br /&gt;
&lt;br /&gt;
[[Image:Geometry Cognitive Tutor.jpg]]&lt;br /&gt;
&lt;br /&gt;
The treatment condition varied between studies:&lt;br /&gt;
* Study one: The Geometry Cognitive Tutor + the Help Tutor&lt;br /&gt;
* Study two: The Geometry Cognitive Tutor + the Help Seeking Support Environment (help seeking explicit instruction, self-assessment tutor, and Help Tutor)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Help Tutor:&#039;&#039;&#039;&lt;br /&gt;
The Help Tutor is a Cognitive Tutor in its own right: it identifies recommended types of actions by tracing students’ interaction with the Geometry Cognitive Tutor relative to a metacognitive help-seeking model. When students perform actions that deviate from the recommended ones, the Help Tutor presents a message that stresses the recommended action. Messages from the metacognitive Help Tutor and the domain-level Cognitive Tutor are coordinated, so that the student receives only the most helpful message at each point [2].   &lt;br /&gt;
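The coordination of messages can be sketched as a simple arbitration step. This is a hypothetical illustration only; the priority scheme and the messages below are assumptions, not the actual Help Tutor implementation.&lt;br /&gt;

```python
# Hypothetical sketch of coordinating domain-level and metacognitive
# feedback so the student sees only one message. Priorities are assumed.

def arbitrate(domain_msg, metacog_msg):
    """Each message is a (priority, text) pair or None; lower priority wins."""
    candidates = [m for m in (domain_msg, metacog_msg) if m is not None]
    if not candidates:
        return None  # both tutors are silent
    return min(candidates, key=lambda m: m[0])[1]

# The student clicked through a hint quickly and then answered incorrectly:
domain = (2, "Remember: the angles of a triangle sum to 180 degrees.")
metacog = (1, "Take time to read the hint before trying another answer.")
print(arbitrate(domain, metacog))
# -> Take time to read the hint before trying another answer.
```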
&lt;br /&gt;
[[Image:The Help-tutor.jpg]]&lt;br /&gt;
[[image:The Help Seeking Model.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Self-Assessment Tutor:&#039;&#039;&#039;&lt;br /&gt;
The ability to correctly self-assess one’s own knowledge level is correlated with strategic use of help (Tobias and Everson, 2002). The Self-Assessment Tutor is designed to tutor students on &lt;br /&gt;
their self-assessment skills; to help students make appropriate learning decisions based on their self-assessment; and mainly, to give students a tutoring environment, low on cognitive load, in which they can practice using their help-seeking skills.  &lt;br /&gt;
The curriculum used by the Treatment group in study two consists of interleaving Self Assessment and Cognitive Tutor + Help Tutor sessions, with the Self Assessment sessions taking about 10% of the students’ time. During each self-assessment session the student assesses the skills to be practiced in the subsequent Cognitive Tutor section.&lt;br /&gt;
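The interleaving described above can be sketched as a simple session scheduler. The session lengths below are assumed for illustration; only the roughly 10% time share for self-assessment comes from the study design.&lt;br /&gt;

```python
# Illustrative sketch (assumed session lengths): interleaving short
# Self-Assessment sessions with Cognitive Tutor + Help Tutor sessions so
# that self-assessment takes roughly 10% of total time.

def build_curriculum(n_units, tutor_minutes=45, sa_minutes=5):
    """Before each unit, schedule a self-assessment of that unit's skills."""
    sessions = []
    for unit in range(1, n_units + 1):
        sessions.append(("self-assessment", unit, sa_minutes))
        sessions.append(("cognitive+help tutor", unit, tutor_minutes))
    return sessions

plan = build_curriculum(3)
sa_time = sum(m for kind, _u, m in plan if kind == "self-assessment")
total = sum(m for _k, _u, m in plan)
print(round(sa_time / total, 2))  # -> 0.1
```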
  &lt;br /&gt;
[[Image:The Self-Assessment Tutor.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Explicit help-seeking instruction:&#039;&#039;&#039;&lt;br /&gt;
As White and Frederiksen demonstrated (1998), reflecting in the classroom environment on the desired metacognitive process helps students internalize it. With that goal in mind, we created a short classroom lesson about help seeking with the following objectives: to give students a better declarative understanding of desired and effective help-seeking behavior; to improve their dispositions and attitudes towards seeking help; and to frame the help-seeking knowledge as an important learning goal, alongside Geometry knowledge, for the coming few weeks. The instruction includes a video presentation with examples of productive and faulty help-seeking behavior and the appropriate help-seeking principles. &lt;br /&gt;
&lt;br /&gt;
[[Image:Explicit help-seeking instruction.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Dependent variables ===&lt;br /&gt;
&lt;br /&gt;
The study uses two levels of dependent measures:&lt;br /&gt;
# Directly assessing Help Seeking skills&lt;br /&gt;
# Assessing domain-level learning, and by that evaluating the contribution of the help-seeking skills.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Assessments of help-seeking knowledge:&lt;br /&gt;
* [[Normal post-test]]: &lt;br /&gt;
** Declarative: hypothetical help-seeking dilemmas&lt;br /&gt;
** Procedural: Help seeking error rate while working with the tutor&lt;br /&gt;
* [[Transfer]]: Ability to use optional hints embedded within certain test items in the paper test.&lt;br /&gt;
&lt;br /&gt;
[[Image:embedded hints.jpg]]&lt;br /&gt;
&lt;br /&gt;
2. Assessments of domain knowledge:&lt;br /&gt;
* [[Normal post-test]]: Problem solving and explanation items like those in the tutor&#039;s instruction.&lt;br /&gt;
* [[Transfer]]: &lt;br /&gt;
** Data insufficiency (or &amp;quot;not enough information&amp;quot;) items.&lt;br /&gt;
** Conceptual understanding items (study two only)&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
&lt;br /&gt;
The combination of explicit help-seeking instruction, on-time feedback on help-seeking errors, and raising awareness of knowledge deficits will&lt;br /&gt;
* Improve [[feature validity]] of students&#039; help seeking skills&lt;br /&gt;
and thus, in turn, will&lt;br /&gt;
* Improve learning of domain knowledge by using those skills effectively.&lt;br /&gt;
&lt;br /&gt;
==== Instructional Principles ====&lt;br /&gt;
&lt;br /&gt;
The main principle being evaluated here is whether [[Roll_help seeking principle | instruction should support meta-cognition in the context of problem solving]] by using principles of cognitive tutoring such as:&lt;br /&gt;
* Giving direct instruction&lt;br /&gt;
* Giving immediate feedback on errors&lt;br /&gt;
* Prompting for self-assessment&lt;br /&gt;
&lt;br /&gt;
This utilizes the following instructional principles:&lt;br /&gt;
&lt;br /&gt;
* The Self-Assessment Tutor utilizes the [[Reflection questions]] principle&lt;br /&gt;
* The Help Tutor itself utilizes the [[Tutoring feedback]] principle&lt;br /&gt;
* The Help Seeking Instruction utilizes the [[Explicit instruction]] principle.&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
As seen below (adapted from Roll et al. 2006), metacognitive tutoring has the following goals:&lt;br /&gt;
# First, the tutoring system should capture metacognitive errors (in our case, help-seeking errors).&lt;br /&gt;
# Then, it should lead to an improved metacognitive behavior within the tutoring system.&lt;br /&gt;
# This, in turn, should lead to an improvement in the domain learning.&lt;br /&gt;
# The effect should persist beyond the scope of the tutoring system.&lt;br /&gt;
# As a result, students are expected to become better future learners.&lt;br /&gt;
&lt;br /&gt;
[[Image:Roll_Pyramid.jpg]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 1: Capture metacognitive errors ====&lt;br /&gt;
&lt;br /&gt;
In study 1, 17% of students&#039; actions were classified as errors. These errors were significantly negatively correlated with learning (r=-0.42) - the more help-seeking errors captured by the system, the smaller the improvement from pre- to post-test.&lt;br /&gt;
This data suggests that the help-seeking model captures the relevant actions, and that the goal was achieved - the Help Tutor captures help-seeking errors.&lt;br /&gt;
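The reported relationship can be illustrated with a standard Pearson correlation between per-student help-seeking error rates and learning gains. The data values below are invented toy numbers; only the method is real.&lt;br /&gt;

```python
# Toy illustration of the analysis: Pearson correlation between help-seeking
# error rate and pre-to-post learning gain. The data values are invented.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

error_rate = [0.05, 0.10, 0.20, 0.30, 0.40]  # fraction of actions flagged
gain = [0.50, 0.45, 0.30, 0.20, 0.10]        # normalized post minus pre
print(round(pearson_r(error_rate, gain), 2))
# Strongly negative for this toy data, echoing the sign of the reported r.
```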
&lt;br /&gt;
[[Image:help-seeking_and_learning.jpg]]&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 2: Improve metacognitive behavior ====&lt;br /&gt;
&lt;br /&gt;
Students’ hint usage improved significantly while working with the Help Tutor across several measures, most notably, help-seeking error rate, frequency of asking for bottom-out hints, and hint reading time. Study 2 also shows that these aspects of improvement persist even beyond the scope of the Help Tutor. These improvements consistently reached significance only after an extended period of time of working with the Help Tutor (i.e., after the second part of the study). We hypothesize that since the Help Tutor feedback appeared in two different areas of geometry learning, students could more easily acquire the domain-independent help-seeking skills and thus transfer them better to the other subject matter areas addressed between and after the two in which the Help Tutor was used. &lt;br /&gt;
The addition of the help-seeking instruction and self-assessment episodes in Study 2 led to improvements in students’ conceptual help-seeking knowledge, in time spent reading hints, and an apparent improvement in hint-to-error ratio. The fact that students were more likely to ask for hints than to commit more errors provides behavioral evidence that the self-assessment instruction, provided both by the Self-Assessment Tutor and by certain messages in the Help Tutor, leads students to be more aware of their need for help, reinforcing the causal link between self-assessment and strategic help seeking (Tobias &amp;amp; Everson, 2002). &lt;br /&gt;
Although students’ hint usage improved, no major improvements in the deliberateness of students’ solution attempts were found (besides that indicated by the differences in hint-to-error ratio). It may be that the Help Tutor did not support this aspect of learning well enough. An alternative explanation is that as long as students attempted to solve problems (whether successfully or not) they were too occupied by the problem-solving attempts and thus did not pay attention (and cognitive resources) to the Help Tutor until they reached an impasse that required them to ask for help. This may be an outcome of our design decision to give feedback in the context of domain learning. &lt;br /&gt;
&lt;br /&gt;
[[Image:help-seeking_behavior.jpg]]&lt;br /&gt;
&lt;br /&gt;
This data suggests that while the system was not able to improve all aspects of the desired metacognitive behavior, it did improve students&#039; behavior on the common types of errors.&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 3: Improve domain learning ====&lt;br /&gt;
&lt;br /&gt;
While students&#039; help-seeking behavior improved while working with the Help Tutor (in study 1) or the full Help Seeking Support Environment (in study 2), we did not observe differences in learning between the two conditions in either study 1 or study 2.&lt;br /&gt;
&lt;br /&gt;
Study 1 results:&lt;br /&gt;
&lt;br /&gt;
[[Image:study_1_results.jpg]]&lt;br /&gt;
&lt;br /&gt;
Study 2 results:&lt;br /&gt;
&lt;br /&gt;
[[Image:study_2_results.jpg]]&lt;br /&gt;
&lt;br /&gt;
* Note: since the tests at times 1 and 2 evaluated different instructional units (Angles vs. Quadrilaterals), lower grades at time 2 do not indicate a decrease in knowledge.&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 4: Improve future metacognitive behavior ====&lt;br /&gt;
&lt;br /&gt;
As detailed under the evaluation of goal 2 above, students’ improved hint usage persisted even beyond the scope of the Help Tutor, and these improvements reached significance consistently only after an extended period of working with the Help Tutor.&lt;br /&gt;
&lt;br /&gt;
[[Image:four months of HT.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, students&#039; help-seeking behavior was evaluated in a transfer environment - the paper-and-pencil tests.&lt;br /&gt;
&lt;br /&gt;
Hypothetical help seeking dilemmas, such as the one described below, were used to evaluate declarative help-seeking knowledge.&lt;br /&gt;
&lt;br /&gt;
  1. You tried to answer a question that you know, but for some reason the tutor says that your answer is wrong. What should you do? &lt;br /&gt;
  [ ] First I would review my calculations. Perhaps I can find the mistake myself? &lt;br /&gt;
  [ ] The Tutor must have made a mistake. I will retype the same answer again. &lt;br /&gt;
  [ ] I would ask for a hint, to understand my mistake.&lt;br /&gt;
&lt;br /&gt;
Procedural help-seeking skills were evaluated using hints embedded in the tests (see the figure in the Dependent Variables section above).&lt;br /&gt;
&lt;br /&gt;
In study 1 (which included only the Help Tutor component), students in the Treatment condition demonstrated neither better declarative nor better procedural help-seeking knowledge, compared with the Control condition.&lt;br /&gt;
&lt;br /&gt;
In study 2 (which included the explicit help-seeking instruction and the Self-Assessment Tutor in addition to the Help Tutor), students in the Treatment condition demonstrated better declarative help-seeking knowledge (compared with Control group students) but no better procedural knowledge.&lt;br /&gt;
&lt;br /&gt;
[[Image:Declarative_knowledge.jpg]]&lt;br /&gt;
[[Image:Procedural_knowledge.jpg]]&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 5: Improve future domain learning ====&lt;br /&gt;
&lt;br /&gt;
Due to technical difficulties, this goal was not evaluated in either study.&lt;br /&gt;
&lt;br /&gt;
==== Summary of results ====&lt;br /&gt;
&lt;br /&gt;
Overall, the following pattern of results emerges from the studies:&lt;br /&gt;
* The Help Seeking Support Environment captured help-seeking errors during the learning process.&lt;br /&gt;
* Students improved their help-seeking behavior while working with the system.&lt;br /&gt;
* Students demonstrated improved help-seeking behavior even once support was removed, after working with the Help Tutor over two months and two different topics. &lt;br /&gt;
* Furthermore, students acquired better help-seeking declarative knowledge.&lt;br /&gt;
* However, students&#039; domain learning did not improve.&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
==== 1. Assessing help-seeking behavior ====&lt;br /&gt;
These studies put forward several direct measures of help seeking. Most notably, we found that the online help-seeking model is able to capture faulty help-seeking behavior in a manner that is consistent with domain learning. The operational nature of the model (i.e., the fact that it can be run on a computer) puts it in a unique position compared to other models of help-seeking behavior. Furthermore, formative assessment using the model is done in a transparent manner that does not interrupt the learning process. These qualities of the model enable us to do a micro-genetic (moment-by-moment) analysis of help-seeking behavior over extended periods of time, an analysis that is quite rare in the literature on help seeking and on self-regulated learning. In addition, the detailed evaluation of students’ actions allows us to adapt the learning process to the learner in novel ways – that is, adapt not only to students’ domain knowledge and behavior, but also to their metacognitive knowledge and behavior. &lt;br /&gt;
Students’ online behavior as assessed by the help-seeking model also correlated with paper-and-pencil measures of help seeking knowledge and behavior. This result provides some support that the measures capture a metacognitive behavior that is domain and environment independent in nature. A combination of these instruments with domain learning measures can be used to investigate the different factors affecting learning behaviors and outcomes. &lt;br /&gt;
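A moment-by-moment analysis of this kind can be sketched as a classifier over logged tutor actions. The event fields, labels, and the 2-second threshold below are illustrative assumptions, not the model&#039;s actual parameters.&lt;br /&gt;

```python
# Illustrative sketch of classifying logged tutor actions against a
# help-seeking model. Field names, labels, and the threshold are assumed.
import operator

FAST_SECS = 2.0  # actions faster than this are treated as non-deliberate

def classify(event):
    """Label one log event; event has an 'action' type and a 'duration'."""
    fast = operator.lt(event["duration"], FAST_SECS)  # duration below threshold
    if event["action"] == "hint" and fast:
        return "error: clicking through hints"
    if event["action"] == "attempt" and fast:
        return "error: hasty guess"
    return "ok"

log = [
    {"action": "hint", "duration": 0.8},     # hint dismissed almost instantly
    {"action": "hint", "duration": 6.5},     # hint actually read
    {"action": "attempt", "duration": 12.0}, # deliberate solution attempt
]
print([classify(e) for e in log])
# -> ['error: clicking through hints', 'ok', 'ok']
```

A per-student help-seeking error rate is then simply the fraction of logged actions labeled as errors.&lt;br /&gt;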
==== 2. Improving help-seeking behavior and knowledge ====&lt;br /&gt;
Students’ hint usage improved significantly while working with the Help Tutor, and these improvements persisted beyond its scope; see the evaluation of goals 2 and 4 above for the detailed measures and our interpretation.&lt;br /&gt;
==== 3. Using metacognitive improvement to improve domain learning ====&lt;br /&gt;
The two studies did not find an effect of the improved help-seeking behavior on domain learning. One possible explanation might be that the help-seeking model focuses on the wrong actions, although the negative correlation with learning across studies suggests otherwise. Alternatively, perhaps the improvement in help-seeking behavior was not sufficient to measurably impact domain-level learning. However, we would expect to see at least a trend in the test scores, rather than the virtual tie we observed across the three units. &lt;br /&gt;
A third explanation (see paper #8 by authors) is that poor help-seeking behavior may be more a consequence of poor motivation toward domain learning than it is a cause.  Students who do not care to learn geometry may exhibit poor help seeking behaviors and learn less, thus yielding a correlation.  However, in this case, poor help seeking behavior may not reflect a lack of help-seeking skill per se, but a lack of motivation to deeply engage in help seeking and associated deep learning strategies (like attempting to encode instructional material in terms of deeper domain-relevant features rather than more superficial perceptual features).  Indeed Help condition students changed their help-seeking behaviors and appeared to do so in a lasting way, but perhaps associated changes in other deep learning strategies are needed before clear effects on domain learning can be observed.   &lt;br /&gt;
To elaborate, consider that the help-seeking model focuses on how deliberately students seek help, but does not evaluate the content of the help or how it is used. It makes sure that students take enough time to read a hint, but does not assess how that information is processed. One implicit assumption of this line of research is that students can learn from the explanations given in contextual hints. However, this process may be more challenging than assumed. For example, Renkl (2002) showed that instructional explanations can actually hinder students’ tendency to self-explain and thus learn. In other words, perhaps students do not learn enough from help not only because of how they obtain it, but also because of how they process it. Reexamining the existing help-seeking literature supports this hypothesis. Very few experiments to date have actively manipulated help seeking in ITSs. Those that manipulated the content of the help, or students’ reaction to it, often found that interventions that increase reflection yield better learning: Dutke and Reimer (2000) found that principle-based hints are better than operative ones; Ringenberg and VanLehn (2006) found that analogous solved examples may be better than conventional hints; Schworm and Renkl (2002) found that deep reflection questions produced more learning than conventional hints; and Baker et al. (2006) showed that auxiliary exercises for students who misuse hints help them learn better. At the same time, Baker (2006) found that reducing help abuse (and other gaming behaviors) may not by itself contribute to learning gains, a result similar to the ones presented in this paper.&lt;br /&gt;
&lt;br /&gt;
=== Connections===&lt;br /&gt;
&lt;br /&gt;
* The Help Tutor attempts to extend traditional tutoring beyond the common domains. In that respect, it is similar to the work of Amy Ogan on tutoring [[FrenchCulture | French Culture]].&lt;br /&gt;
&lt;br /&gt;
* The interaction between the student and the tutor, which is &amp;quot;natural&amp;quot; in the control condition, is guided by the Help Tutor in the treatment condition. This is similar to the scripting manipulations of the [[Rummel Scripted Collaborative Problem Solving]] and [[Walker A Peer Tutoring Addition]] projects.&lt;br /&gt;
&lt;br /&gt;
* Another example of studying the effects of hints is [[Ringenberg Examples-as-Help|Ringenberg&#039;s study]], in which hints are compared to examples.&lt;br /&gt;
&lt;br /&gt;
* Going to do an in-vivo study at a LearnLab site? Check out how to answer the [[FAQ for teachers|teachers’ FAQ]].&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
Plans for June 2007 - Dec. 2007:&lt;br /&gt;
* Present the study at the International Conference on Artificial Intelligence in Education&lt;br /&gt;
* Submit camera ready copy of the paper to the Journal on Metacognition and Instruction&lt;br /&gt;
* Analyze the logfiles for metacognitive learning&lt;br /&gt;
&lt;br /&gt;
=== Annotated bibliography ===&lt;br /&gt;
&lt;br /&gt;
# Aleven, V., &amp;amp; Koedinger, K.R. (2000) Limitations of student control: Do students know when they need help? in proceedings of 5th International Conference on Intelligent Tutoring Systems, 292-303. Berlin: Springer Verlag. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., McLaren, B.M., Roll, I., &amp;amp; Koedinger, K.R. (2004) Toward tutoring help seeking - Applying cognitive modeling to meta-cognitive skills . in proceedings of 7th International Conference on Intelligent Tutoring Systems, 227-39. Berlin: Springer-Verlag. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., Roll, I., McLaren, B.M., Ryu, E.J., &amp;amp; Koedinger, K.R. (2005) An architecture to combine meta-cognitive and cognitive tutoring: Pilot testing the Help Tutor. in proceedings of 12th International Conference on Artificial Intelligence in Education, Amsterdam, The Netherlands: IOS press. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., McLaren, B.M., Roll, I., &amp;amp; Koedinger, K.R. (2006). Toward meta-cognitive tutoring: A model of help seeking with a Cognitive Tutor. Int Journal of Artificial Intelligence in Education(16), 101-30 [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Baker, R.S., Corbett, A.T., &amp;amp; Koedinger, K.R. (2004) Detecting Student Misuse of Intelligent Tutoring Systems. in proceedings of 7th International Conference on Intelligent Tutoring Systems, 531-40.&lt;br /&gt;
# Baker, R.S., Roll, I., Corbett, A.T., &amp;amp; Koedinger, K.R. (2005) Do Performance Goals Lead Students to Game the System? in proceedings of 12th International Conference on Artificial Intelligence in Education, 57-64. Amsterdam, The Netherlands: IOS Press.&lt;br /&gt;
# Chang, K.K., Beck, J.E., Mostow, J., &amp;amp; Corbett, A. (2006) Does Help Help? A Bayes Net Approach to Modeling Tutor Interventions. in proceedings of Workshop on Educational Data Mining at AAAI 2006, 41-6. Menlo Park, California: AAAI.&lt;br /&gt;
# Roll, I., Baker, R.S., Aleven, V., McLaren, B.M., &amp;amp; Koedinger, K.R. (2005) Modeling Students’ Metacognitive Errors in Two Intelligent Tutoring Systems. in L. Ardissono (Ed.), proceedings of User Modeling 2005, 379-88. Berlin: Springer-Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Ryu, E., Sewall, J., Leber, B., McLaren, B.M., Aleven, V., &amp;amp; Koedinger, K.R. (2006) Towards Teaching Metacognition: Supporting Spontaneous Self-Assessment. in proceedings of 8th International Conference on Intelligent Tutoring Systems, 738-40. Berlin: Springer Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B.M., Ryu, E., Baker, R.S., &amp;amp; Koedinger, K.R. (2006) The Help Tutor: Does Metacognitive Feedback Improve Students&#039; Help-Seeking Actions, Skills and Learning? in proceedings of 8th International Conference on Intelligent Tutoring Systems, 360-9. Berlin: Springer Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B. M., &amp;amp; Koedinger, K. R. (2007). Can help seeking be tutored? Searching for the secret sauce of metacognitive tutoring. International Conference on Artificial Intelligence in Education, 203-10. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B. M., &amp;amp; Koedinger, K. R. (2007). Designing for metacognition - applying cognitive tutor principles to the tutoring of help seeking. Metacognition and Learning, 2(2). [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Schworm, S., &amp;amp; Renkl, A. (2002) Learning by solved example problems: Instructional explanations reduce self-explanation activity. in proceedings of The 24Th Annual Conference of the Cognitive Science Society, 816-21. Mahwah, NJ: Erlbaum.&lt;br /&gt;
# Yudelson, M.V., Medvedeva, O., Legowski, E., Castine, M., Jukic, D., &amp;amp; Crowley, R.S. (2006) Mining Student Learning Data to Develop High Level Pedagogic Strategy in a Medical ITS. in proceedings of Workshop on Educational Data Mining at AAAI 2006, Menlo Park, CA: AAAI.&lt;br /&gt;
# Roll, I., Aleven, V., &amp;amp; Koedinger, K.R. (2004) Promoting Effective Help-Seeking Behavior through Declarative Instruction. in proceedings of 7th International Conference on Intelligent Tutoring Systems, 857-9. Berlin: Springer-Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Empirical Study]]&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=The_Help_Tutor_Roll_Aleven_McLaren&amp;diff=9501</id>
		<title>The Help Tutor Roll Aleven McLaren</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=The_Help_Tutor_Roll_Aleven_McLaren&amp;diff=9501"/>
		<updated>2009-05-21T04:20:24Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Summary of results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Towards Tutoring [[Metacognition]] - The Case of Help Seeking ==&lt;br /&gt;
Ido Roll, Vincent Aleven, Bruce M. McLaren, Kenneth Koedinger&lt;br /&gt;
&lt;br /&gt;
=== Meta-data ===&lt;br /&gt;
&lt;br /&gt;
PIs: Vincent Aleven, Ido Roll, Bruce M. McLaren, Ken Koedinger&lt;br /&gt;
&lt;br /&gt;
Other Contributors: EJ Ryu (programmer)&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Study # !! Start Date !! End Date !! LearnLab Site !!  # of Students !! Total Participant Hours !! DataShop? &lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;1&#039;&#039;&#039; || 2004       || 2004     || Analysis of existing data || 40 || 280 || No, old data&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;2&#039;&#039;&#039; || 2005       || 2005     || Analysis of existing data || 70 || 105 || No, old data&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;3&#039;&#039;&#039; || 5/2005     || 5/2005   || Hampton &amp;amp; Wilkinsburg (Geometry) || 60 || 270 || No, incompatible format&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;4&#039;&#039;&#039; || 2/2006     || 4/2006   || CWCTC (Geometry)          || 84 || 1,008 || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
&lt;br /&gt;
While working with a tutoring system, students are expected to regulate their own learning process. However, they often demonstrate an inadequate [[metacognition|metacognitive process]] in doing so. For example, students often ask for help too frequently or not frequently enough.&lt;br /&gt;
In this project we built an Intelligent Tutoring System to teach [[metacognition]], and in particular, to improve students&#039; [[help-seeking behavior]].  Our Help Seeking Support Environment includes three components:&lt;br /&gt;
# Direct [[help-seeking behavior|help seeking]] [[explicit instruction|instruction]], given by the teacher&lt;br /&gt;
# A [[Self-Assessment]] Tutor, to help students evaluate their own need for help&lt;br /&gt;
# The Help Tutor - a domain-independent agent that can be added as an adjunct to a [[cognitive tutor]]. Rather than making help-seeking decisions for the students, the Help Tutor teaches better help-seeking skills by tracing students actions on a (meta)cognitive [[help-seeking model]] and giving students appropriate feedback. &lt;br /&gt;
&lt;br /&gt;
In a series of [[in vivo experiment]]s, the Help Tutor accurately detected help-seeking errors that were associated with poorer learning and with poorer [[declarative]] and [[procedural]] [[knowledge component]]s of help seeking.  The main findings were that students made fewer help-seeking errors while working with the Help Tutor and acquired better help seeking [[declarative]] [[knowledge component]]s. &lt;br /&gt;
However, we did not find evidence that this led to an improvement in learning at the domain level or to better [[help-seeking behavior]] in a paper-and-pencil environment. &lt;br /&gt;
We pose a number of hypotheses in an attempt to explain these results. We question the current focus of metacognitive tutoring, and suggest ways to reexamine the role of [[help facilities]] and of metacognitive tutoring within Intelligent Tutoring Systems.&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
Teaching [[metacognition]] not only holds the promise of improving current learning in the domain of interest; it can also, perhaps mainly, accelerate future learning and support successful regulation of independent learning. One example of metacognitive knowledge is help-seeking [[knowledge component]]s: the ability to identify the need for help, and to elicit appropriate assistance from the [[relevant resources|help facilities]].&lt;br /&gt;
However, considerable evidence shows that metacognitive [[knowledge component]]s are in need of better support. For example, while working with Intelligent Tutoring Systems, students try to &amp;quot;[[game the system]]&amp;quot; or do not [[self-explanation|self-explain]] enough. Similarly, research shows that students&#039; [[help-seeking behavior]] leaves much room for improvement. &lt;br /&gt;
&lt;br /&gt;
==== Shallow help seeking [[knowledge component]]s ====&lt;br /&gt;
Research shows that students do not use their help-seeking knowledge components appropriately. For example, Aleven et al. (2006) showed that 30% of students&#039; actions were consecutive fast help requests (a common form of [[help abuse]], termed &#039;[[clicking through hints]]&#039;), made without taking enough time to read the requested hints.&lt;br /&gt;
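As a purely illustrative sketch (not the project’s actual implementation; the log format and the 2-second reading threshold are assumptions), such fast consecutive hint requests could be flagged in log data as follows:&lt;br /&gt;

```python
# Hypothetical log-analysis sketch: flag "clicking through hints".
# The action format and the 2-second reading threshold are assumptions,
# not taken from the actual Help Tutor or Cognitive Tutor logs.

FAST_HINT_SECONDS = 2.0  # a hint followed this quickly was likely unread

def flag_clicking_through(actions):
    """Return indices of hint requests issued too soon after a prior hint.

    actions: list of (kind, timestamp) pairs, kind in {'hint', 'attempt'},
    timestamp in seconds from session start.
    """
    flagged = []
    prev_kind, prev_time = None, None
    for i, (kind, time) in enumerate(actions):
        if kind == 'hint' and prev_kind == 'hint':
            # too little time passed since the previous hint to have read it
            if FAST_HINT_SECONDS > time - prev_time:
                flagged.append(i)
        prev_kind, prev_time = kind, time
    return flagged
```

A real detector would also consider hint level and problem context; this only captures the timing pattern described above.&lt;br /&gt;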
Extensive log-file analysis suggests that students apply faulty [[knowledge component]]s such as the following:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faulty [[procedural]] [[knowledge component]]s:&#039;&#039;&#039;&lt;br /&gt;
Cognitive aspects:&lt;br /&gt;
  If I don’t know the answer =&amp;gt; &lt;br /&gt;
  I should guess&lt;br /&gt;
&lt;br /&gt;
Motivational aspects:&lt;br /&gt;
  If I get the answer correct =&amp;gt;&lt;br /&gt;
  I achieved the goal&lt;br /&gt;
&lt;br /&gt;
Social aspects:&lt;br /&gt;
  If I ask for help =&amp;gt;&lt;br /&gt;
  I am weak&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faulty [[declarative]] [[knowledge component]]s:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
  Asking for hints will always reduce my skill level&lt;br /&gt;
&lt;br /&gt;
  Making an error is better than asking for a hint&lt;br /&gt;
&lt;br /&gt;
  Only weak people ask for help&lt;br /&gt;
&lt;br /&gt;
==== Teaching vs. supporting [[metacognition]] ====&lt;br /&gt;
&lt;br /&gt;
Several systems support students&#039; metacognitive actions in a way that encourages, or even forces, students to learn productively and efficiently. For example, a tutoring system can require the student to self-explain. While this approach is likely to improve domain learning in the supported environment, the effect is not likely to persist beyond the scope of the tutoring system, and therefore is not likely to help students become better future learners. &lt;br /&gt;
&lt;br /&gt;
Towards that end, we chose not to &#039;&#039;&#039;support&#039;&#039;&#039; students&#039; help seeking actions, but to &#039;&#039;&#039;teach&#039;&#039;&#039; them better help-seeking skills. Rather than making the metacognitive decisions for the students (for example, by preventing help-seeking errors or gaming opportunities), this study focuses on helping students refine their Help Seeking [[knowledge component]]s and acquire better [[feature validity]] of their [[help-seeking behavior|help-seeking]] [[metacognition|metacognitive skills]].&lt;br /&gt;
&lt;br /&gt;
By doing so, we examine whether metacognitive knowledge can be taught using familiar conventional domain-level pedagogies.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
See [[:Category:Help Tutor|Help Tutor Glossary]]&lt;br /&gt;
&lt;br /&gt;
=== Research questions ===&lt;br /&gt;
&lt;br /&gt;
# Can conventional and well-established instructional principles in the domain level be used to tutor [[metacognition|metacognitive]] [[knowledge component]]s such as [[help-seeking behavior|Help Seeking]] [[knowledge component]]s?&lt;br /&gt;
# Does the practice of better metacognitive behavior translate, in turn, to better domain learning?&lt;br /&gt;
&lt;br /&gt;
In addition, the project makes the following contributions:&lt;br /&gt;
# An improved understanding of the nature of help-seeking knowledge and its acquisition.&lt;br /&gt;
# A novel framework for the design of  goals, interaction and assessment for metacognitive tutoring.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Two studies were performed with the Help Tutor. In both studies the independent variable was the presence of help-seeking support.&lt;br /&gt;
The control condition used the conventional Geometry Cognitive Tutor:&lt;br /&gt;
&lt;br /&gt;
[[Image:Geometry Cognitive Tutor.jpg]]&lt;br /&gt;
&lt;br /&gt;
The treatment condition varied between studies:&lt;br /&gt;
* Study one: The Geometry Cognitive Tutor + the Help Tutor&lt;br /&gt;
* Study two: The Geometry Cognitive Tutor + the Help Seeking Support Environment (help seeking explicit instruction, self-assessment tutor, and Help Tutor)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Help Tutor:&#039;&#039;&#039;&lt;br /&gt;
The Help Tutor is a Cognitive Tutor in its own right: it identifies recommended types of actions by tracing students’ interaction with the Geometry Cognitive Tutor against a metacognitive help-seeking model. When students perform actions that deviate from the recommended ones, the Help Tutor presents a message that stresses the recommended action to be taken. Messages from the metacognitive Help Tutor and the domain-level Cognitive Tutor are coordinated, so that the student receives only the most helpful message at each point [2].&lt;br /&gt;
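The coordination between the two tutors can be illustrated with a minimal sketch (the message types and the priority ordering below are illustrative assumptions, not the actual algorithm from the paper):&lt;br /&gt;

```python
# Hypothetical sketch of message coordination between the domain-level
# Cognitive Tutor and the metacognitive Help Tutor: when both produce
# feedback on the same student action, only one message is shown.
# The priority table is an illustrative assumption.

PRIORITY = {
    'domain_error': 3,        # wrong-answer feedback from the Cognitive Tutor
    'help_seeking_error': 2,  # metacognitive feedback from the Help Tutor
    'domain_hint': 1,         # an on-demand domain-level hint
}

def select_message(candidates):
    """candidates: list of (message_type, text); return the one text to show."""
    if not candidates:
        return None
    kind, text = max(candidates, key=lambda c: PRIORITY.get(c[0], 0))
    return text
```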
&lt;br /&gt;
[[Image:The Help-tutor.jpg]]&lt;br /&gt;
[[image:The Help Seeking Model.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Self-Assessment Tutor:&#039;&#039;&#039;&lt;br /&gt;
The ability to correctly self-assess one’s own knowledge level is correlated with strategic use of help (Tobias and Everson, 2002). The Self-Assessment Tutor is designed to tutor students on &lt;br /&gt;
their self-assessment skills; to help students make appropriate learning decisions based on their self-assessment; and, mainly, to give students a tutoring environment, low in cognitive load, in which they can practice their help-seeking skills.&lt;br /&gt;
The curriculum used by the Treatment group in study two consists of interleaved Self-Assessment and Cognitive Tutor + Help Tutor sessions, with the self-assessment sessions taking about 10% of the students’ time. During each self-assessment session the student assesses the skills to be practiced in the subsequent Cognitive Tutor session.&lt;br /&gt;
  &lt;br /&gt;
[[Image:The Self-Assessment Tutor.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Explicit help-seeking instruction:&#039;&#039;&#039;&lt;br /&gt;
As White and Frederiksen demonstrated (1998), reflecting in the classroom environment on the desired metacognitive process helps students internalize it. With that goal in mind, we created a short classroom lesson about help seeking with the following objectives: to give students a better declarative understanding of desired and effective help-seeking behavior; to improve their dispositions and attitudes towards seeking help; and to frame the help-seeking knowledge as an important learning goal, alongside Geometry knowledge, for the coming few weeks. The instruction includes a video presentation with examples of productive and faulty help-seeking behavior and the appropriate help-seeking principles. &lt;br /&gt;
&lt;br /&gt;
[[Image:Explicit help-seeking instruction.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Dependent variables ===&lt;br /&gt;
&lt;br /&gt;
The study uses two levels of dependent measures:&lt;br /&gt;
# Directly assessing Help Seeking skills&lt;br /&gt;
# Assessing domain-level learning, and by that evaluating the contribution of the help-seeking skills.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Assessments of help-seeking knowledge:&lt;br /&gt;
* [[Normal post-test]]: &lt;br /&gt;
** Declarative: hypothetical help-seeking dilemmas&lt;br /&gt;
** Procedural: Help seeking error rate while working with the tutor&lt;br /&gt;
* [[Transfer]]: Ability to use optional hints embedded within certain test items in the paper test.&lt;br /&gt;
&lt;br /&gt;
[[Image:embedded hints.jpg]]&lt;br /&gt;
&lt;br /&gt;
2. Assessments of domain knowledge:&lt;br /&gt;
* [[Normal post-test]]: Problem solving and explanation items like those in the tutor&#039;s instruction.&lt;br /&gt;
* [[Transfer]]: &lt;br /&gt;
** Data insufficiency (or &amp;quot;not enough information&amp;quot;) items.&lt;br /&gt;
** Conceptual understanding items (study two only)&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
&lt;br /&gt;
The combination of explicit help-seeking instruction, on-time feedback on help-seeking errors, and raising awareness of knowledge deficits will&lt;br /&gt;
* Improve [[feature validity]] of students&#039; help seeking skills&lt;br /&gt;
and thus, in turn, will&lt;br /&gt;
* Improve learning of domain knowledge by using those skills effectively.&lt;br /&gt;
&lt;br /&gt;
==== Instructional Principles ====&lt;br /&gt;
&lt;br /&gt;
The main principle being evaluated here is whether [[Roll_help seeking principle | instruction should support meta-cognition in the context of problem solving]] by using principles of cognitive tutoring such as:&lt;br /&gt;
* Giving direct instruction&lt;br /&gt;
* Giving immediate feedback on errors&lt;br /&gt;
* Prompting for self-assessment&lt;br /&gt;
&lt;br /&gt;
This utilizes the following instructional principles:&lt;br /&gt;
&lt;br /&gt;
* The Self-Assessment Tutor utilizes the [[Reflection questions]] principle&lt;br /&gt;
* The Help Tutor itself utilizes the [[Tutoring feedback]] principle&lt;br /&gt;
* The Help Seeking Instruction utilizes the [[Explicit instruction]] principle.&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
As seen below (adapted from Roll et al. 2006), metacognitive tutoring has the following goals:&lt;br /&gt;
# First, the tutoring system should capture metacognitive errors (in our case, help-seeking errors).&lt;br /&gt;
# Then, it should lead to an improved metacognitive behavior within the tutoring system.&lt;br /&gt;
# This, in turn, should lead to an improvement in the domain learning.&lt;br /&gt;
# The effect should persist beyond the scope of the tutoring system.&lt;br /&gt;
# As a result, students are expected to become better future learners.&lt;br /&gt;
&lt;br /&gt;
[[Image:Roll_Pyramid.jpg]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 1: Capture metacognitive errors ====&lt;br /&gt;
&lt;br /&gt;
In study 1, 17% of students’ actions were classified as help-seeking errors. These errors were significantly negatively correlated with learning (r = -0.42): the more help-seeking errors captured by the system, the smaller the improvement from pre- to post-test.&lt;br /&gt;
These data suggest that the help-seeking model captures the appropriate actions, and that the goal was achieved: the Help Tutor captures help-seeking errors.&lt;br /&gt;
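For illustration, this kind of correlation can be computed directly from per-student error rates and pre-to-post gains. The data points below are invented for demonstration only; the reported r of -0.42 came from the actual study data.&lt;br /&gt;

```python
# Illustrative computation of an error-rate vs. learning-gain correlation.
# The four students' values below are made up for demonstration.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

error_rate = [0.05, 0.10, 0.20, 0.30]  # fraction of actions flagged as errors
gain = [0.40, 0.35, 0.20, 0.10]        # post-test minus pre-test score
r = pearson_r(error_rate, gain)        # negative: more errors, smaller gain
```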
&lt;br /&gt;
[[Image:help-seeking_and_learning.jpg]]&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 2: Improve metacognitive behavior ====&lt;br /&gt;
&lt;br /&gt;
Students’ hint usage improved significantly while working with the Help Tutor across several measures, most notably help-seeking error rate, frequency of asking for bottom-out hints, and hint reading time. Study 2 also shows that these improvements persist beyond the scope of the Help Tutor. The improvements consistently reached significance only after an extended period of working with the Help Tutor (i.e., after the second part of the study). We hypothesize that because the Help Tutor feedback appeared in two different areas of geometry learning, students could more easily acquire the domain-independent help-seeking skills, and thus transfer them better to the other subject-matter areas addressed between and after the two units in which the Help Tutor was used.&lt;br /&gt;
The addition of the help-seeking instruction and self-assessment episodes in Study 2 led to improvements in students’ conceptual help-seeking knowledge, time to read hints, and an apparent improvement in hint-to-error ratio. The fact that students were more likely to ask for hints than to commit more errors provides behavioral evidence that the self-assessment instruction, delivered both by the Self-Assessment Tutor and by certain messages in the Help Tutor, led students to be more aware of their need for help, reinforcing the causal link between self-assessment and strategic help seeking (Tobias &amp;amp; Everson, 2002).&lt;br /&gt;
Although students’ hint usage improved, we found no major improvements in the deliberateness of students’ solution attempts (beyond those indicated by the differences in hint-to-error ratio). It may be that the Help Tutor did not support this aspect of learning well enough. An alternative explanation is that as long as students were attempting to solve problems (whether successfully or not), they were too occupied by those attempts and did not devote attention (or cognitive resources) to the Help Tutor until they reached an impasse that forced them to ask for help. This may be an outcome of our design decision to give feedback in the context of domain learning.&lt;br /&gt;
&lt;br /&gt;
[[Image:help-seeking_behavior.jpg]]&lt;br /&gt;
&lt;br /&gt;
This data suggests that while the system was not able to improve all aspects of the desired metacognitive behavior, it did improve students&#039; behavior on the common types of errors.&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 3: Improve domain learning ====&lt;br /&gt;
&lt;br /&gt;
While students&#039; help-seeking behavior improved while working with the Help Tutor (in study 1) or the full Help Seeking Support Environment (in study 2), we did not observe differences in domain learning between the two conditions in either study.&lt;br /&gt;
&lt;br /&gt;
Study 1 results:&lt;br /&gt;
&lt;br /&gt;
[[Image:study_1_results.jpg]]&lt;br /&gt;
&lt;br /&gt;
Study 2 results:&lt;br /&gt;
&lt;br /&gt;
[[Image:study_2_results.jpg]]&lt;br /&gt;
&lt;br /&gt;
* Note: since the tests at times 1 and 2 evaluated different instructional units (Angles vs. Quadrilaterals), lower scores at time 2 do not indicate a decrease in knowledge.&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 4: Improve future metacognitive behavior ====&lt;br /&gt;
&lt;br /&gt;
As reported under goal 2 above, students’ hint usage improved across several measures (help-seeking error rate, frequency of bottom-out hint requests, and hint reading time), and study 2 shows that these improvements persisted beyond the scope of the Help Tutor, i.e., in subsequent units in which the Help Tutor was no longer active.&lt;br /&gt;
&lt;br /&gt;
[[Image:four months of HT.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, students&#039; help seeking behavior was evaluated in a transfer environment - the paper and pencil tests.&lt;br /&gt;
&lt;br /&gt;
Hypothetical help seeking dilemmas, such as the one described below, were used to evaluate declarative help-seeking knowledge.&lt;br /&gt;
&lt;br /&gt;
  1. You tried to answer a question that you know, but for some reason the tutor says that your answer is wrong. What should you do? &lt;br /&gt;
  [ ] First I would review my calculations. Perhaps I can find the mistake myself? &lt;br /&gt;
  [ ] The Tutor must have made a mistake. I will retype the same answer again. &lt;br /&gt;
  [ ] I would ask for a hint, to understand my mistake.&lt;br /&gt;
&lt;br /&gt;
Procedural help-seeking skills were evaluated using embedded hints in the tests (see the figure in the Dependent Variables section above).&lt;br /&gt;
&lt;br /&gt;
In study 1 (which included only the Help Tutor component), students in the Treatment condition demonstrated neither better declarative nor better procedural help-seeking knowledge, compared with the Control condition.&lt;br /&gt;
&lt;br /&gt;
In study 2 (which included the explicit help-seeking instruction and the Self-Assessment Tutor in addition to the Help Tutor), students in the Treatment condition demonstrated better declarative help-seeking knowledge (compared with Control group students) but no better procedural knowledge.&lt;br /&gt;
&lt;br /&gt;
[[Image:Declarative_knowledge.jpg]]&lt;br /&gt;
[[Image:Procedural_knowledge.jpg]]&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 5: Improve future domain learning ====&lt;br /&gt;
&lt;br /&gt;
Due to technical difficulties, this goal was not evaluated in either study.&lt;br /&gt;
&lt;br /&gt;
==== Summary of results ====&lt;br /&gt;
&lt;br /&gt;
Overall, the following pattern of results emerges from the studies:&lt;br /&gt;
* The Help Seeking Support Environment detected and responded to inappropriate help-seeking actions during the learning process.&lt;br /&gt;
* Students improved their help-seeking behavior while working with the system.&lt;br /&gt;
* Students demonstrated improved help-seeking behavior even once support was removed, after working with the Help Tutor over two months and two different topics.&lt;br /&gt;
* Furthermore, students acquired better help-seeking declarative knowledge.&lt;br /&gt;
* However, students&#039; domain learning did not improve.&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
These somewhat disappointing results raise two important questions: Why did the environment not lead to an improvement in help-seeking behavior on the paper-test measures? And why did the improved online help-seeking behavior not lead to improved learning gains?&lt;br /&gt;
&lt;br /&gt;
==== Hypothesis 1: Students do not have the skills, but we didn&#039;t teach them right. ==== &lt;br /&gt;
One possible explanation may be that the Help Seeking Support Environment imposes excessive cognitive load during problem solving. Clearly, learning with the Help Seeking Support Environment is more demanding than with the conventional Cognitive Tutor alone, since more needs to be learned. However, much of the extra content is introduced during the classroom discussion and self-assessment sessions. The only extra content presented during the problem-solving sessions is the Help Tutor’s error messages, and these are not expected to increase the load much, especially given that a prioritization algorithm makes sure students receive only one message at a time (from either the Help Tutor or the [[cognitive tutor]]). Also, if the Help Seeking Support Environment indeed imposed too much cognitive load, it should be expected to hinder learning, which we did not observe.&lt;br /&gt;
&lt;br /&gt;
==== Hypothesis 2: The role of help seeking in ITS ==== &lt;br /&gt;
Hints in tutoring systems have two objectives: to promote learning of challenging skills, and to help students move forward within the curriculum (i.e., to prevent them from getting stuck). While the latter is achieved easily with both the Cognitive Tutor and the Help Seeking Support Environment, achieving the former is much harder. It is not yet clear what makes a hint effective, or how to create an effective [[hint sequence]]. It is possible that the hints, as implemented in the units of the Cognitive Tutor we used, are not optimal. For example, there may be too many levels of hints, with each level adding too little information to the previous one. Also, perhaps the detailed explanations are too demanding with regard to students’ reading comprehension ability. It is quite possible that these hints, regardless of how they are used, are above students&#039; [[zone of proximal development]], and thus do not contribute much to learning. Support for this idea comes from Schworm and Renkl (2002), who found that explanations offered by the system impaired learning when [[self-explanation]] was required. The Geometry Cognitive Tutor prompts for [[self-explanation]] in certain units. Perhaps elaborated hints are redundant, or even damaging, when [[self-explanation]] is required. &lt;br /&gt;
It is also possible that [[help-seeking behavior]] we currently view as faulty may actually be useful and desirable, in specific contexts for specific students. For example, perhaps a student who does not know the material should be allowed to view the bottom-out hint immediately, in order to turn the problem into a solved example. Support for this idea can be found in work by Yudelson et al. (2006), in which medical students in a leading medical school successfully learned by repeatedly asking for more elaborated hints. Such “clicking-through hints” behavior would be considered faulty by the Help Seeking Support Environment. However, this population of students is known to have good metacognitive skills (without them it is unlikely they would have reached their current position). Thus, it seems that sometimes “misusing” help (e.g., [[help abuse]], according to the Help Seeking Support Environment) can be beneficial to some students. Further evidence can be found in Baker et al. (2004), who showed that some (but not all) students who “[[game the system]]” (i.e., click through hints or guess repeatedly) learn just as much as students who do not game. It may be the case that certain gaming behaviors are adaptive, rather than irrational. Students who use these strategies will insist on viewing the [[bottom out hint]] and will ignore all intermediate hints, whether domain-level or metacognitive. Once intermediate hints are ignored, better [[help-seeking behavior]] according to the Help Seeking Support Environment should have no effect whatsoever on domain [[knowledge component]]s, as indeed was observed.&lt;br /&gt;
It is possible that we are overestimating students’ ability to learn from hints. Our first recommendation is to re-evaluate the role of hints in [[cognitive tutor]]s using complementary methodologies such as log-file analysis (e.g., Chang et al. (2006) use dynamic Bayes nets to evaluate the contribution of hints in a reading tutor); tracing individual students (to evaluate the different uses students make of hints); experimentation with different types of hints (for example, proactive vs. on-demand); and analysis of human tutors who aid students while working with [[cognitive tutor]]s.&lt;br /&gt;
&lt;br /&gt;
==== Hypothesis 3: The focus of metacognitive tutoring in ITS. ====&lt;br /&gt;
The previous hypothesis, focused on students’ tendency to skip hints, suggests that perhaps the main issue is not lack of knowledge, but lack of motivation. In other words, &#039;&#039;&#039;perhaps students already have these skills in place, but choose not to use them.&#039;&#039;&#039; For students who ignore intermediate hints, metacognitive messages offer little incentive. While the Help Seeking Support Environment can increase the probability that a proper hint level appears on the screen, it has no influence on whether that hint is read, or whether the student attempts to understand it. Students may ignore the messages for several reasons. For example, they may habitually click through hints, and may resent the changes that the Help Seeking Support Environment imposes. This idea is consistent with the teachers’ observation that the students were not fond of the Help Seeking Support Environment error messages. Students may comply with the messages in order to make progress, but beyond that ignore their content. The test data discussed above provide support for this idea. On 7 out of the 12 hint evaluations (seen in the findings section of goal 4), students scored lower on items with hints than on items with no hints. A [[cognitive headroom]] explanation does not account for this difference, since the Request hints did not add much load. A more likely explanation is that students chose to skip the hints since they were new to them in the given context. Baker et al. (2005) reviewed several reasons why students [[game the system]]. While no clear answer was given, the question is applicable here as well. &lt;br /&gt;
Motivational issues bring us to our final hypothesis. Time preference discounting (Feldstein, 1964) is a term coined in economics that describes behavior in which people prefer a smaller immediate reward over a larger distant reward. In the tutoring environment, comparing the benefit of an immediate correct answer with the delayed benefit (if any) of acting metacognitively correctly may often lead the student to choose the former. If that is indeed the case, then students may already have the right metacognitive skills in place. The question we should be asking ourselves is not only how to get students to learn the desired metacognitive skills, but mainly how to get students to use them.&lt;br /&gt;
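As a purely illustrative sketch of the time-preference argument (the rewards, delay, and discount rate below are hypothetical values, not quantities measured in these studies), exponential discounting shows how a quick correct answer can outweigh a larger but delayed learning payoff:

```python
# Hypothetical illustration of time preference discounting (Feldstein, 1964)
# applied to the tutoring trade-off. All numbers are made up for the example.

def discounted_value(reward, delay, rate=0.5):
    """Present value of a reward received after `delay` steps,
    under exponential discounting with the given per-step rate."""
    return reward * (1 - rate) ** delay

# Immediate payoff: click through to the answer and get it marked correct now.
answer_now = discounted_value(1.0, delay=0)
# Larger payoff that arrives later, e.g., understanding that pays off at the exam.
learn_later = discounted_value(3.0, delay=4)
print(answer_now > learn_later)  # with steep discounting, the immediate reward wins
```

Under a steep discount rate the smaller immediate reward dominates, which is the behavior pattern the hypothesis attributes to students.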
&lt;br /&gt;
=== Connections===&lt;br /&gt;
&lt;br /&gt;
* The Help Tutor attempts to extend traditional tutoring beyond the common domains. In this respect, it is similar to the work of Amy Ogan on tutoring [[FrenchCulture | French Culture]].&lt;br /&gt;
&lt;br /&gt;
* The manipulation of the interaction between the student and the tutor, which is &amp;quot;natural&amp;quot; in the control condition, is guided by the Help Tutor. This is similar to the scripting manipulation of the [[Rummel Scripted Collaborative Problem Solving]] and the [[Walker A Peer Tutoring Addition]] projects.&lt;br /&gt;
&lt;br /&gt;
* Another example for studying the effects of hints is [[Ringenberg Examples-as-Help|Ringenberg&#039;s study]], in which hints are compared to examples. &lt;br /&gt;
&lt;br /&gt;
* Going to do an in-vivo study at a LearnLab site? Check out how to answer the [[FAQ for teachers|teachers&#039; FAQ]].&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
Plans for June 2007 - Dec. 2007:&lt;br /&gt;
* Present the study at the International Conference on Artificial Intelligence in Education&lt;br /&gt;
* Submit the camera-ready copy of the paper to the journal Metacognition and Learning&lt;br /&gt;
* Analyze the log files for metacognitive learning&lt;br /&gt;
&lt;br /&gt;
=== Annotated bibliography ===&lt;br /&gt;
&lt;br /&gt;
# Aleven, V., &amp;amp; Koedinger, K.R. (2000) Limitations of student control: Do students know when they need help? in proceedings of 5th International Conference on Intelligent Tutoring Systems, 292-303. Berlin: Springer Verlag. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., McLaren, B.M., Roll, I., &amp;amp; Koedinger, K.R. (2004) Toward tutoring help seeking - Applying cognitive modeling to meta-cognitive skills . in proceedings of 7th International Conference on Intelligent Tutoring Systems, 227-39. Berlin: Springer-Verlag. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., Roll, I., McLaren, B.M., Ryu, E.J., &amp;amp; Koedinger, K.R. (2005) An architecture to combine meta-cognitive and cognitive tutoring: Pilot testing the Help Tutor. in proceedings of 12th International Conference on Artificial Intelligence in Education, Amsterdam, The Netherlands: IOS press. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., McLaren, B.M., Roll, I., &amp;amp; Koedinger, K.R. (2006). Toward meta-cognitive tutoring: A model of help seeking with a Cognitive Tutor. International Journal of Artificial Intelligence in Education, 16, 101-30. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Baker, R.S., Corbett, A.T., &amp;amp; Koedinger, K.R. (2004) Detecting Student Misuse of Intelligent Tutoring Systems. in proceedings of 7th International Conference on Intelligent Tutoring Systems, 531-40.&lt;br /&gt;
# Baker, R.S., Roll, I., Corbett, A.T., &amp;amp; Koedinger, K.R. (2005) Do Performance Goals Lead Students to Game the System? in proceedings of 12th International Conference on Artificial Intelligence in Education, 57-64. Amsterdam, The Netherlands: IOS Press.&lt;br /&gt;
# Chang, K.K., Beck, J.E., Mostow, J., &amp;amp; Corbett, A. (2006) Does Help Help? A Bayes Net Approach to Modeling Tutor Interventions. in proceedings of Workshop on Educational Data Mining at AAAI 2006, 41-6. Menlo Park, California: AAAI.&lt;br /&gt;
# Feldstein, M.S. (1964). The Social Time Preference Discount Rate in Cost-Benefit Analysis. The Economic Journal, 74(294), 360-79.&lt;br /&gt;
# Roll, I., Baker, R.S., Aleven, V., McLaren, B.M., &amp;amp; Koedinger, K.R. (2005) Modeling Students’ Metacognitive Errors in Two Intelligent Tutoring Systems. in L. Ardissono,  (Eds.), in proceedings of User Modeling 2005, 379-88. Berlin: Springer-Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Ryu, E., Sewall, J., Leber, B., McLaren, B.M., Aleven, V., &amp;amp; Koedinger, K.R. (2006) Towards Teaching Metacognition: Supporting Spontaneous Self-Assessment. in proceedings of 8th International Conference on Intelligent Tutoring Systems, 738-40. Berlin: Springer Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B.M., Ryu, E., Baker, R.S., &amp;amp; Koedinger, K.R. (2006) The Help Tutor: Does Metacognitive Feedback Improve Students&#039; Help-Seeking Actions, Skills and Learning? in proceedings of 8th International Conference on Intelligent Tutoring Systems, 360-9. Berlin: Springer Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B. M., &amp;amp; Koedinger, K. R. (2007). Can help seeking be tutored? Searching for the secret sauce of metacognitive tutoring. International Conference on Artificial Intelligence in Education, , 203-10. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B. M., &amp;amp; Koedinger, K. R. (2007). Designing for metacognition - applying cognitive tutor principles to the tutoring of help seeking. Metacognition and Learning, 2(2). [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Schworm, S., &amp;amp; Renkl, A. (2002) Learning by solved example problems: Instructional explanations reduce self-explanation activity. in proceedings of The 24Th Annual Conference of the Cognitive Science Society, 816-21. Mahwah, NJ: Erlbaum.&lt;br /&gt;
# Yudelson, M.V., Medvedeva, O., Legowski, E., Castine, M., Jukic, D., &amp;amp; Crowley, R.S. (2006) Mining Student Learning Data to Develop High Level Pedagogic Strategy in a Medical ITS. in proceedings of Workshop on Educational Data Mining at AAAI 2006, Menlo Park, CA: AAAI.&lt;br /&gt;
# Roll, I., Aleven, V., &amp;amp; Koedinger, K.R. (2004) Promoting Effective Help-Seeking Behavior through Declarative Instruction. in proceedings of 7th International Conference on Intelligent Tutoring Systems, 857-9. Berlin: Springer-Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Empirical Study]]&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=The_Help_Tutor_Roll_Aleven_McLaren&amp;diff=9500</id>
		<title>The Help Tutor Roll Aleven McLaren</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=The_Help_Tutor_Roll_Aleven_McLaren&amp;diff=9500"/>
		<updated>2009-05-21T04:18:23Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Evaluation of goal 4: Improve future metacognitive behavior */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Towards Tutoring [[Metacognition]] - The Case of Help Seeking ==&lt;br /&gt;
Ido Roll, Vincent Aleven, Bruce M. McLaren, Kenneth Koedinger&lt;br /&gt;
&lt;br /&gt;
=== Meta-data ===&lt;br /&gt;
&lt;br /&gt;
PI&#039;s: Vincent Aleven, Ido Roll, Bruce M. McLaren, Ken Koedinger&lt;br /&gt;
&lt;br /&gt;
Other Contributers: EJ Ryu (programmer)&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Study # !! Start Date !! End Date !! LearnLab Site !!  # of Students !! Total Participant Hours !! DataShop? &lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;1&#039;&#039;&#039; || 2004       || 2004     || Analysis of existing data || 40 || 280 || No, old data&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;2&#039;&#039;&#039; || 2005       || 2005     || Analysis of existing data || 70 || 105 || No, old data&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;3&#039;&#039;&#039; || 5/2005     || 5/2005   || Hampton &amp;amp; Wilkinsburg (Geometry) || 60 || 270 || No, incompatible format&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;4&#039;&#039;&#039; || 2/2006     || 4/2006   || CWCTC (Geometry)          || 84 || 1,008 || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
&lt;br /&gt;
While working with a tutoring system, students are expected to regulate their own learning process. However, they often demonstrate inadequate [[metacognition|metacognitive processes]] in doing so. For example, students often ask for help too frequently or not frequently enough. &lt;br /&gt;
In this project we built an Intelligent Tutoring System to teach [[metacognition]], and in particular, to improve students&#039; [[help-seeking behavior]].  Our Help Seeking Support Environment includes three components:&lt;br /&gt;
# Direct [[help-seeking behavior|help seeking]] [[explicit instruction|instruction]], given by the teacher&lt;br /&gt;
# A [[Self-Assessment]] Tutor, to help students evaluate their own need for help&lt;br /&gt;
# The Help Tutor - a domain-independent agent that can be added as an adjunct to a [[cognitive tutor]]. Rather than making help-seeking decisions for the students, the Help Tutor teaches better help-seeking skills by tracing students&#039; actions against a (meta)cognitive [[help-seeking model]] and giving students appropriate feedback. &lt;br /&gt;
&lt;br /&gt;
In a series of [[in vivo experiment]]s, the Help Tutor accurately detected help-seeking errors that were associated with poorer learning and with poorer [[declarative]] and [[procedural]] [[knowledge component]]s of help seeking.  The main findings were that students made fewer help-seeking errors while working with the Help Tutor and acquired better help seeking [[declarative]] [[knowledge component]]s. &lt;br /&gt;
However, we did not find evidence that this led to an improvement in learning at the domain level or to better [[help-seeking behavior]] in a paper-and-pencil environment. &lt;br /&gt;
We pose a number of hypotheses in an attempt to explain these results. We question the current focus of metacognitive tutoring, and suggest ways to reexamine the role of [[help facilities]] and of metacognitive tutoring within Intelligent Tutoring Systems.&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
Teaching [[metacognition]] holds the promise not only of improving current learning in the domain of interest, but also, perhaps mainly, of accelerating future learning and supporting successful regulation of independent learning. One example of metacognitive knowledge is help-seeking [[knowledge component]]s: the ability to identify the need for help, and to elicit appropriate assistance from the [[relevant resources|help facilities]].  &lt;br /&gt;
However, considerable evidence shows that metacognitive [[knowledge component]]s are in need of better support. For example, while working with Intelligent Tutoring Systems, students try to &amp;quot;[[game the system]]&amp;quot; or do not [[self-explanation|self-explain]] enough. Similarly, research shows that students&#039; [[help-seeking behavior]] leaves much room for improvement. &lt;br /&gt;
&lt;br /&gt;
==== Shallow help seeking [[knowledge component]]s ====&lt;br /&gt;
Research shows that students do not use their help-seeking knowledge components appropriately. For example, Aleven et al. (2006) show that 30% of students&#039; actions were consecutive fast help requests (a common form of [[help abuse]], termed &#039;[[clicking through hints]]&#039;), without taking enough time to read the requested hints.  &lt;br /&gt;
Extensive log-file analysis suggests that students apply faulty [[knowledge component]]s such as the following:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faulty [[procedural]] [[knowledge component]]s:&#039;&#039;&#039;&lt;br /&gt;
Cognitive aspects:&lt;br /&gt;
  If I don’t know the answer =&amp;gt; &lt;br /&gt;
  I should guess&lt;br /&gt;
&lt;br /&gt;
Motivational aspects:&lt;br /&gt;
  If I get the answer correct =&amp;gt;&lt;br /&gt;
  I achieved the goal&lt;br /&gt;
&lt;br /&gt;
Social aspects:&lt;br /&gt;
  If I ask for help =&amp;gt;&lt;br /&gt;
  I am weak&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faulty [[declarative]] [[knowledge component]]s:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
  Asking for hints will always reduce my skill level&lt;br /&gt;
&lt;br /&gt;
  Making an error is better than asking for a hint&lt;br /&gt;
&lt;br /&gt;
  Only weak people ask for help&lt;br /&gt;
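The &#039;clicking through hints&#039; pattern mentioned above (consecutive fast help requests) can be operationalized directly from tutor log files. The sketch below is illustrative only: the event format, the 2-second reading threshold, and the function name are assumptions for the example, not the project&#039;s actual detector or model.

```python
# Illustrative sketch (not the actual Help Tutor model): flag hint requests
# issued too quickly after a previous hint request for the hint text to have
# been read, a common operationalization of "clicking through hints".

def count_fast_hint_requests(events, min_read_time=2.0):
    """events: ordered list of (timestamp_in_seconds, action) tuples.
    Counts hint requests that follow another hint request by less than
    min_read_time seconds (threshold is hypothetical)."""
    fast = 0
    prev_time = prev_action = None
    for time, action in events:
        if (action == "hint_request" and prev_action == "hint_request"
                and time - prev_time < min_read_time):
            fast += 1
        prev_time, prev_action = time, action
    return fast

log = [(0.0, "attempt"), (12.0, "hint_request"), (12.6, "hint_request"),
       (13.1, "hint_request"), (40.0, "attempt")]
print(count_fast_hint_requests(log))  # two of the three requests were too fast
```

Dividing such a count by the total number of actions gives a per-student rate comparable to the proportions reported in the log-file analyses above.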
&lt;br /&gt;
==== Teaching vs. supporting [[metacognition]] ====&lt;br /&gt;
&lt;br /&gt;
Several systems support students&#039; metacognitive actions in a way that encourages, or even forces, students to learn productively and efficiently. For example, a tutoring system can require the student to self-explain. While this approach is likely to improve domain learning in the supported environment, the effect is not likely to persist beyond the scope of the tutoring system, and therefore is not likely to help students become better future learners. &lt;br /&gt;
&lt;br /&gt;
Therefore, we chose not to &#039;&#039;&#039;support&#039;&#039;&#039; students&#039; help-seeking actions, but to &#039;&#039;&#039;teach&#039;&#039;&#039; them better help-seeking skills. Rather than making the metacognitive decisions for the students (for example, by preventing help-seeking errors or gaming opportunities), this study focuses on helping students refine their help-seeking [[knowledge component]]s and acquire better [[feature validity]] of their [[help-seeking behavior|help-seeking]] [[metacognition|metacognitive skills]].&lt;br /&gt;
&lt;br /&gt;
By doing so, we examine whether metacognitive knowledge can be taught using familiar conventional domain-level pedagogies.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
See [[:Category:Help Tutor|Help Tutor Glossary]]&lt;br /&gt;
&lt;br /&gt;
=== Research questions ===&lt;br /&gt;
&lt;br /&gt;
# Can conventional and well-established instructional principles in the domain level be used to tutor [[metacognition|metacognitive]] [[knowledge component]]s such as [[help-seeking behavior|Help Seeking]] [[knowledge component]]s?&lt;br /&gt;
# Does the practice of better metacognitive behavior translate, in turn, to better domain learning?&lt;br /&gt;
&lt;br /&gt;
In addition, the project makes the following contributions:&lt;br /&gt;
# An improved understanding of the nature of help-seeking knowledge and its acquisition.&lt;br /&gt;
# A novel framework for the design of  goals, interaction and assessment for metacognitive tutoring.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Two studies were performed with the Help Tutor. In both studies, the independent variable was the presence of help-seeking support.&lt;br /&gt;
The control condition used the conventional Geometry Cognitive Tutor:&lt;br /&gt;
&lt;br /&gt;
[[Image:Geometry Cognitive Tutor.jpg]]&lt;br /&gt;
&lt;br /&gt;
The treatment condition varied between studies:&lt;br /&gt;
* Study one: The Geometry Cognitive Tutor + the Help Tutor&lt;br /&gt;
* Study two: The Geometry Cognitive Tutor + the Help Seeking Support Environment (help seeking explicit instruction, self-assessment tutor, and Help Tutor)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Help Tutor:&#039;&#039;&#039;&lt;br /&gt;
The Help Tutor is a Cognitive Tutor in its own right that identifies recommended types of actions by tracing students’ interaction with the Geometry Cognitive Tutor relative to a metacognitive help-seeking model. When students perform actions that deviate from the recommended ones, the Help Tutor presents a message that stresses the recommended action to be taken. Messages from the metacognitive Help Tutor and the domain-level Cognitive Tutor are coordinated, so that the student receives only the most helpful message at each point (Aleven et al., 2004).   &lt;br /&gt;
&lt;br /&gt;
[[Image:The Help-tutor.jpg]]&lt;br /&gt;
[[image:The Help Seeking Model.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Self-Assessment Tutor:&#039;&#039;&#039;&lt;br /&gt;
The ability to correctly self-assess one’s own knowledge level is correlated with strategic use of help (Tobias and Everson, 2002). The Self-Assessment Tutor is designed to tutor students on &lt;br /&gt;
their self-assessment skills; to help students make appropriate learning decisions based on their self-assessment; and mainly, to give students a tutoring environment, low on cognitive load, in which they can practice using their help-seeking skills.  &lt;br /&gt;
The curriculum used by the Treatment group in study two consists of interleaving Self Assessment and Cognitive Tutor + Help Tutor sessions, with the Self Assessment sessions taking about 10% of the students’ time. During each self-assessment session the student assesses the skills to be practiced in the subsequent Cognitive Tutor section.&lt;br /&gt;
  &lt;br /&gt;
[[Image:The Self-Assessment Tutor.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Explicit help-seeking instruction:&#039;&#039;&#039;&lt;br /&gt;
As White and Frederiksen (1998) demonstrated, reflecting in the classroom environment on the desired metacognitive process helps students internalize it. With that goal in mind, we created a short classroom lesson about help seeking with the following objectives: to give students a better declarative understanding of desired and effective help-seeking behavior; to improve their dispositions and attitudes towards seeking help; and to frame help-seeking knowledge as an important learning goal, alongside Geometry knowledge, for the coming few weeks. The instruction includes a video presentation with examples of productive and faulty help-seeking behavior and the appropriate help-seeking principles. &lt;br /&gt;
&lt;br /&gt;
[[Image:Explicit help-seeking instruction.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Dependent variables ===&lt;br /&gt;
&lt;br /&gt;
The study uses two levels of dependent measures:&lt;br /&gt;
# Directly assessing Help Seeking skills&lt;br /&gt;
# Assessing domain-level learning, and by that evaluating the contribution of the help-seeking skills.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Assessments of help-seeking knowledge:&lt;br /&gt;
* [[Normal post-test]]: &lt;br /&gt;
** Declarative: hypothetical help-seeking dilemmas&lt;br /&gt;
** Procedural: Help seeking error rate while working with the tutor&lt;br /&gt;
* [[Transfer]]: Ability to use optional hints embedded within certain test items in the paper test.&lt;br /&gt;
&lt;br /&gt;
[[Image:embedded hints.jpg]]&lt;br /&gt;
&lt;br /&gt;
2. Assessments of domain knowledge:&lt;br /&gt;
* [[Normal post-test]]: Problem solving and explanation items like those in the tutor&#039;s instruction.&lt;br /&gt;
* [[Transfer]]: &lt;br /&gt;
** Data insufficiency (or &amp;quot;not enough information&amp;quot;) items.&lt;br /&gt;
** Conceptual understanding items (study two only)&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
&lt;br /&gt;
The combination of explicit help-seeking instruction, timely feedback on help-seeking errors, and raising awareness of knowledge deficits will&lt;br /&gt;
* Improve [[feature validity]] of students&#039; help seeking skills&lt;br /&gt;
and thus, in turn, will&lt;br /&gt;
* Improve learning of domain knowledge by using those skills effectively.&lt;br /&gt;
&lt;br /&gt;
==== Instructional Principles ====&lt;br /&gt;
&lt;br /&gt;
The main principle being evaluated here is whether [[Roll_help seeking principle | instruction should support meta-cognition in the context of problem solving]] by using principles of cognitive tutoring such as:&lt;br /&gt;
* Giving direct instruction&lt;br /&gt;
* Giving immediate feedback on errors&lt;br /&gt;
* Prompting for self-assessment&lt;br /&gt;
&lt;br /&gt;
This utilizes the following instructional principles:&lt;br /&gt;
&lt;br /&gt;
* The Self-Assessment Tutor utilizes the [[Reflection questions]] principle&lt;br /&gt;
* The Help Tutor itself utilizes the [[Tutoring feedback]] principle&lt;br /&gt;
* The Help Seeking Instruction utilizes the [[Explicit instruction]] principle.&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
As seen below (adapted from Roll et al. 2006), metacognitive tutoring has the following goals:&lt;br /&gt;
# First, the tutoring system should capture metacognitive errors (in our case, help-seeking errors).&lt;br /&gt;
# Then, it should lead to improved metacognitive behavior within the tutoring system.&lt;br /&gt;
# This, in turn, should lead to an improvement in the domain learning.&lt;br /&gt;
# The effect should persist beyond the scope of the tutoring system.&lt;br /&gt;
# As a result, students are expected to become better future learners.&lt;br /&gt;
&lt;br /&gt;
[[Image:Roll_Pyramid.jpg]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 1: Capture metacognitive errors ====&lt;br /&gt;
&lt;br /&gt;
In study 1, 17% of students&#039; actions were classified as help-seeking errors. These errors were significantly negatively correlated with learning (r = -0.42): the more help-seeking errors captured by the system, the smaller the improvement from pre- to post-test.&lt;br /&gt;
These data suggest that the help-seeking model classifies actions appropriately, and that the goal was achieved: the Help Tutor captures help-seeking errors.&lt;br /&gt;
&lt;br /&gt;
[[Image:help-seeking_and_learning.jpg]]&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 2: Improve metacognitive behavior ====&lt;br /&gt;
&lt;br /&gt;
Students’ hint usage improved significantly while working with the Help Tutor across several measures, most notably, help-seeking error rate, frequency of asking for bottom-out hints, and hint reading time. Study 2 also shows that these aspects of improvement persist even beyond the scope of the Help Tutor. These improvements consistently reached significance only after an extended period of time of working with the Help Tutor (i.e., after the second part of the study). We hypothesize that since the Help Tutor feedback appeared in two different areas of geometry learning, students could more easily acquire the domain-independent help-seeking skills and thus transfer them better to the other subject matter areas addressed between and after the two in which the Help Tutor was used. &lt;br /&gt;
The addition of the help-seeking instruction and self-assessment episodes in Study 2 led to improvements in students’ conceptual help-seeking knowledge, time to read hints, and an apparent improvement in hint-to-error ratio. The fact that students were more likely to ask for hints rather than commit more errors provides behavioral evidence that the self-assessment instruction, provided by both the Self-Assessment Tutor and by certain messages in the Help Tutor, leads students to be more aware of their need for help, reinforcing the causal link between self-assessment and strategic help seeking (Tobias &amp;amp; Everson, 2002). &lt;br /&gt;
Although students’ hint usage improved, no major improvements in the deliberateness of students’ solution attempts were found (besides that indicated by the differences in hint-to-error ratio). It may be that the Help Tutor did not support this aspect of learning well enough. An alternative explanation is that as long as students attempted to solve problems (whether successfully or not) they were too occupied by the problem-solving attempts and thus did not pay attention (and cognitive resources) to the Help Tutor until they reached an impasse that required them to ask for help. This may be an outcome of our design decision to give feedback in the context of domain learning. &lt;br /&gt;
&lt;br /&gt;
[[Image:help-seeking_behavior.jpg]]&lt;br /&gt;
&lt;br /&gt;
These data suggest that while the system was not able to improve all aspects of the desired metacognitive behavior, it did improve students&#039; behavior on the most common types of errors.&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 3: Improve domain learning ====&lt;br /&gt;
&lt;br /&gt;
While students&#039; help-seeking behavior improved while working with the Help Tutor (in study 1) or the full Help Seeking Support Environment (in study 2), we did not observe differences in learning between the two conditions in either study 1 or study 2.&lt;br /&gt;
&lt;br /&gt;
Study 1 results:&lt;br /&gt;
&lt;br /&gt;
[[Image:study_1_results.jpg]]&lt;br /&gt;
&lt;br /&gt;
Study 2 results:&lt;br /&gt;
&lt;br /&gt;
[[Image:study_2_results.jpg]]&lt;br /&gt;
&lt;br /&gt;
* Note: since the tests at times 1 and 2 evaluated different instructional units (Angles vs. Quadrilaterals), lower scores at time 2 do not indicate a decrease in knowledge.&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 4: Improve future metacognitive behavior ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:four months of HT.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In addition, students&#039; help-seeking behavior was evaluated in a transfer environment: the paper-and-pencil tests.&lt;br /&gt;
&lt;br /&gt;
Hypothetical help seeking dilemmas, such as the one described below, were used to evaluate declarative help-seeking knowledge.&lt;br /&gt;
&lt;br /&gt;
  1. You tried to answer a question that you know, but for some reason the tutor says that your answer is wrong. What should you do? &lt;br /&gt;
  [ ] First I would review my calculations. Perhaps I can find the mistake myself? &lt;br /&gt;
  [ ] The Tutor must have made a mistake. I will retype the same answer again. &lt;br /&gt;
  [ ] I would ask for a hint, to understand my mistake.&lt;br /&gt;
&lt;br /&gt;
Procedural help-seeking skills were evaluated using embedded hints in the tests (see the figure in the Dependent Measures section above).&lt;br /&gt;
&lt;br /&gt;
In study 1 (which included only the Help Tutor component), students in the Treatment condition demonstrated neither better declarative nor better procedural help-seeking knowledge, compared with the Control condition.&lt;br /&gt;
&lt;br /&gt;
In study 2 (which included the explicit help-seeking instruction and the Self-Assessment Tutor in addition to the Help Tutor), students in the Treatment condition demonstrated better declarative help-seeking knowledge (compared with Control group students) but no better procedural knowledge.&lt;br /&gt;
&lt;br /&gt;
[[Image:Declarative_knowledge.jpg]]&lt;br /&gt;
[[Image:Procedural_knowledge.jpg]]&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 5: Improve future domain learning ====&lt;br /&gt;
&lt;br /&gt;
Due to technical difficulties, this goal was not evaluated in either study.&lt;br /&gt;
&lt;br /&gt;
==== Summary of results ====&lt;br /&gt;
&lt;br /&gt;
Overall, the following pattern of results emerges from the studies:&lt;br /&gt;
* The Help Seeking Support Environment captured inappropriate actions during the learning process&lt;br /&gt;
* Students improved their help-seeking behavior while working with the system&lt;br /&gt;
* Students acquired better declarative help-seeking knowledge after working with the system&lt;br /&gt;
* However, students&#039; domain learning did not improve&lt;br /&gt;
* Also, the improvement in students&#039; help-seeking behavior did not persist beyond the tutoring system.&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
These somewhat disappointing results raise important questions: Why did the environment not lead to an improvement in learning or in help-seeking behavior on the paper-test measures? Why did the improved online help-seeking behavior not lead to improved learning gains?&lt;br /&gt;
&lt;br /&gt;
==== Hypothesis 1: Students do not have the skills, but we didn&#039;t teach them right. ==== &lt;br /&gt;
One possible explanation may be that the Help Seeking Support Environment requires excessive cognitive load during problem solving. Clearly, the learning process with the Help Seeking Support Environment is more demanding than with the conventional Cognitive Tutor alone, since more needs to be learned. However, much of the extra content is introduced during the classroom discussion and self-assessment sessions. The only extra content presented during the problem-solving sessions is the Help Tutor’s error messages, but these are not expected to increase the load much, especially given that a prioritization algorithm makes sure students receive only one message at a time (either from the Help Tutor or the [[cognitive tutor]]). Also, if the Help Seeking Support Environment indeed requires too much cognitive load, it should be expected to hinder learning, which we did not observe.&lt;br /&gt;
&lt;br /&gt;
==== Hypothesis 2: The role of help seeking in ITS ==== &lt;br /&gt;
Hints in tutoring systems have two objectives: to promote learning of challenging skills, and to help students move forward within the curriculum (i.e., to prevent them from getting stuck). While the latter is achieved easily with both the Cognitive Tutor and the Help Seeking Support Environment, achieving the former is much harder. It is not yet clear what makes a hint a good hint, or how to create an effective [[hint sequence]]. It is possible that the hints, as implemented in the units of the Cognitive Tutor we used, are not optimal. For example, there may be too many levels of hints, with each level adding too little information to the previous one. Also, perhaps the detailed explanations are too demanding with regard to students’ reading comprehension ability. It is quite possible that these hints, regardless of how they are used, are above students&#039; [[zone of proximal development]], and thus do not contribute much to learning. Support for this idea comes from Schworm and Renkl (2002), who found that explanations offered by the system impaired learning when [[self-explanation]] was required. The Geometry Cognitive Tutor prompts for [[self-explanation]] in certain units. Perhaps elaborated hints are redundant, or even damaging, when [[self-explanation]] is required. &lt;br /&gt;
It is also possible that [[help-seeking behavior]] that we currently view as faulty may actually be useful and desirable, in specific contexts for specific students. For example, perhaps a student who does not know the material should be allowed to view the bottom-out hint immediately, in order to turn the problem into a solved example. Support for this idea can be found in work by Yudelson et al. (2006), in which medical students in a leading medical school successfully learned by repeatedly asking for more elaborated hints. Such “clicking-through hints” behavior would be considered faulty by the Help Seeking Support Environment. However, this population of students is known to have good metacognitive skills (without them it is unlikely they would have reached their current position). Thus, it seems that sometimes “misusing” help (e.g., [[help abuse]], according to the Help Seeking Support Environment) can be beneficial to some students. Further evidence can be found in Baker et al. (2004), who showed that some (but not all) students who “[[game the system]]” (i.e., click through hints or guess repeatedly) learn just as much as students who do not game. It may be the case that certain gaming behaviors are adaptive, and not irrational. Students who use these strategies will insist on viewing the [[bottom out hint]] and will ignore all intermediate hints, whether domain-level or metacognitive. Once intermediate hints are ignored, better [[help-seeking behavior]] according to the Help Seeking Support Environment should have no effect whatsoever on domain [[knowledge component]]s, as indeed was observed.&lt;br /&gt;
It is possible that we are overestimating students’ ability to learn from hints. Our first recommendation is to re-evaluate the role of hints in [[cognitive tutor]]s using complementary methodologies such as log-file analysis (e.g., Chang (2006) uses dynamic Bayes nets to evaluate the contribution of hints in a reading tutor), tracing individual students (to evaluate the different uses students make of hints), experimentation with different types of hints (for example, proactive vs. on-demand), and analysis of human tutors who aid students while working with [[cognitive tutor]]s.&lt;br /&gt;
&lt;br /&gt;
==== Hypothesis 3: The focus of metacognitive tutoring in ITS. ====&lt;br /&gt;
The previous hypothesis, focused on students’ tendency to skip hints, suggests that perhaps the main issue is not lack of knowledge, but lack of motivation. In other words, &#039;&#039;&#039;perhaps students already have these skills in place, but choose not to use them.&#039;&#039;&#039; For students who ignore intermediate hints, metacognitive messages offer little incentive. While the Help Seeking Support Environment can increase the probability that a proper hint level appears on the screen, it has no influence on whether it is being read, or whether the student attempts to understand it. Students may ignore the messages for several reasons. For example, they may habitually click through hints, and may resent the changes that the Help Seeking Support Environment imposes. This idea is consistent with the teachers’ observation that the students were not fond of the Help Seeking Support Environment error messages. They may comply with them, in order to make progress, but beyond that will ignore their content. The test data discussed above provides support for this idea. On 7 out of the 12 hint evaluations (seen in the findings section of goal 4) students scored lower on items with hints than on items with no hints. A [[cognitive headroom]] explanation does not account for this difference, since the Request hints did not add much load. A more likely explanation is that students chose to skip the hints since they were new to them in the given context. Baker (2005) reviewed several reasons for why students [[game the system]]. While no clear answer was given, the question is applicable here as well. &lt;br /&gt;
Motivational issues bring us to our final hypothesis. Time preference discounting (Feldstein, 1964) is a term coined in economics that describes behavior in which people prefer a smaller immediate reward over a greater but more distant one. In the tutoring environment, comparing the benefit of an immediate correct answer with the delayed benefit (if any) of acting in a metacognitively sound manner may often lead the student to choose the former. If that is indeed the case, then students may already have the right metacognitive skills in place. The question we should be asking ourselves is not only how to get students to learn the desired metacognitive skills, but mainly how to get students to use them.&lt;br /&gt;
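This trade-off can be sketched numerically; the discount rate and reward values below are invented for illustration and are not taken from the studies.&lt;br /&gt;

```python
# Illustrative sketch only: exponential time preference discounting
# (after Feldstein, 1964). The rate and rewards are invented numbers.
def present_value(reward, delay_steps, rate=0.3):
    # Each step of delay shrinks the subjective value of the reward.
    return reward * (1.0 - rate) ** delay_steps

# A small immediate payoff (a correct answer now) can outweigh a
# larger delayed payoff (robust learning later) once discounted.
immediate = present_value(1.0, 0)   # 1.0
delayed = present_value(3.0, 5)     # roughly 0.5
```

Under such discounting, taking the immediate correct answer is the locally rational choice, which is consistent with students preferring bottom-out hints over slower metacognitive strategies.&lt;br /&gt;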
&lt;br /&gt;
=== Connections===&lt;br /&gt;
&lt;br /&gt;
* The Help Tutor attempts to extend traditional tutoring beyond the common domains. In that, it is similar to the work of Amy Ogan on tutoring [[FrenchCulture | French Culture]]&lt;br /&gt;
&lt;br /&gt;
* The interaction between the student and the tutor, which is &amp;quot;natural&amp;quot; in the control condition, is guided by the Help Tutor in the treatment condition.  This is similar to the scripting manipulation of the [[Rummel Scripted Collaborative Problem Solving]] and the [[Walker A Peer Tutoring Addition]] projects.&lt;br /&gt;
&lt;br /&gt;
* Another example for studying the effects of hints is [[Ringenberg Examples-as-Help|Ringenberg&#039;s study]], in which hints are compared to examples. &lt;br /&gt;
&lt;br /&gt;
* Going to do an in-vivo study at a LearnLab site? Check out how to answer [[FAQ for teachers|teacher&#039;s FAQ]]&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
Plans for June 2007 - Dec. 2007:&lt;br /&gt;
* Present the study in the International Conference on Artificial Intelligence on Education&lt;br /&gt;
* Submit camera ready copy of the paper to the Journal on Metacognition and Instruction&lt;br /&gt;
* Analyze the logfiles for Metacognitive learning&lt;br /&gt;
&lt;br /&gt;
=== Annotated bibliography ===&lt;br /&gt;
&lt;br /&gt;
# Aleven, V., &amp;amp; Koedinger, K.R. (2000) Limitations of student control: Do students know when they need help? in proceedings of 5th International Conference on Intelligent Tutoring Systems, 292-303. Berlin: Springer Verlag. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., McLaren, B.M., Roll, I., &amp;amp; Koedinger, K.R. (2004) Toward tutoring help seeking - Applying cognitive modeling to meta-cognitive skills . in proceedings of 7th International Conference on Intelligent Tutoring Systems, 227-39. Berlin: Springer-Verlag. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., Roll, I., McLaren, B.M., Ryu, E.J., &amp;amp; Koedinger, K.R. (2005) An architecture to combine meta-cognitive and cognitive tutoring: Pilot testing the Help Tutor. in proceedings of 12th International Conference on Artificial Intelligence in Education, Amsterdam, The Netherlands: IOS press. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., McLaren, B.M., Roll, I., &amp;amp; Koedinger, K.R. (2006). Toward meta-cognitive tutoring: A model of help seeking with a Cognitive Tutor. Int Journal of Artificial Intelligence in Education(16), 101-30 [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Baker, R.S., Corbett, A.T., &amp;amp; Koedinger, K.R. (2004) Detecting Student Misuse of Intelligent Tutoring Systems. in proceedings of 7Th International Conference on Intelligent Tutoring Systems, 531-40.&lt;br /&gt;
# Baker, R.S., Roll, I., Corbett, A.T., &amp;amp; Koedinger, K.R. (2005) Do Performance Goals Lead Students to Game the System? in proceedings of 12Th International Conference on Artificial Intelligence in Education, 57-64. Amsterdam, The Netherlands: IOS Press.&lt;br /&gt;
# Chang, K.K., Beck, J.E., Mostow, J., &amp;amp; Corbett, A. (2006) Does Help Help? A Bayes Net Approach to Modeling Tutor Interventions. in proceedings of Workshop on Educational Data Mining at AAAI 2006, 41-6. Menlo Park, California: AAAI.&lt;br /&gt;
# Feldstein, M.S. (1964). The Social Time Preference Discount Rate in Cost-Benefit Analysis. The Economic Journal, 74(294), 360-79.&lt;br /&gt;
# Roll, I., Baker, R.S., Aleven, V., McLaren, B.M., &amp;amp; Koedinger, K.R. (2005) Modeling Students’ Metacognitive Errors in Two Intelligent Tutoring Systems. in L. Ardissono,  (Eds.), in proceedings of User Modeling 2005, 379-88. Berlin: Springer-Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Ryu, E., Sewall, J., Leber, B., McLaren, B.M., Aleven, V., &amp;amp; Koedinger, K.R. (2006) Towards Teaching Metacognition: Supporting Spontaneous Self-Assessment. in proceedings of 8th International Conference on Intelligent Tutoring Systems, 738-40. Berlin: Springer Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B.M., Ryu, E., Baker, R.S., &amp;amp; Koedinger, K.R. (2006) The Help Tutor: Does Metacognitive Feedback Improve Students&#039; Help-Seeking Actions, Skills and Learning? in proceedings of 8th International Conference on Intelligent Tutoring Systems, 360-9. Berlin: Springer Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B. M., &amp;amp; Koedinger, K. R. (2007). Can help seeking be tutored? Searching for the secret sauce of metacognitive tutoring. International Conference on Artificial Intelligence in Education, , 203-10. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B. M., &amp;amp; Koedinger, K. R. (2007). Designing for metacognition - applying cognitive tutor principles to the tutoring of help seeking. Metacognition and Learning, 2(2). [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Schworm, S., &amp;amp; Renkl, A. (2002) Learning by solved example problems: Instructional explanations reduce self-explanation activity. in proceedings of The 24Th Annual Conference of the Cognitive Science Society, 816-21. Mahwah, NJ: Erlbaum.&lt;br /&gt;
# Yudelson, M.V., Medvedeva, O., Legowski, E., Castine, M., Jukic, D., &amp;amp; Crowley, R.S. (2006) Mining Student Learning Data to Develop High Level Pedagogic Strategy in a Medical ITS. in proceedings of Workshop on Educational Data Mining at AAAI 2006, Menlo Park, CA: AAAI.&lt;br /&gt;
# Roll, I., Aleven, V., &amp;amp; Koedinger, K.R. (2004) Promoting Effective Help-Seeking Behavior through Declarative Instruction. in proceedings of 7th International Conference on Intelligent Tutoring Systems, 857-9. Berlin: Springer-Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Empirical Study]]&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=The_Help_Tutor_Roll_Aleven_McLaren&amp;diff=9499</id>
		<title>The Help Tutor Roll Aleven McLaren</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=The_Help_Tutor_Roll_Aleven_McLaren&amp;diff=9499"/>
		<updated>2009-05-21T04:16:55Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: Undo revision 9497 by Idoroll (Talk)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Towards Tutoring [[Metacognition]] - The Case of Help Seeking ==&lt;br /&gt;
Ido Roll, Vincent Aleven, Bruce M. McLaren, Kenneth Koedinger&lt;br /&gt;
&lt;br /&gt;
=== Meta-data ===&lt;br /&gt;
&lt;br /&gt;
PI&#039;s: Vincent Aleven, Ido Roll, Bruce M. McLaren, Ken Koedinger&lt;br /&gt;
&lt;br /&gt;
Other Contributors: EJ Ryu (programmer)&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Study # !! Start Date !! End Date !! LearnLab Site !!  # of Students !! Total Participant Hours !! DataShop? &lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;1&#039;&#039;&#039; || 2004       || 2004     || Analysis of existing data || 40 || 280 || No, old data&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;2&#039;&#039;&#039; || 2005       || 2005     || Analysis of existing data || 70 || 105 || No, old data&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;3&#039;&#039;&#039; || 5/2005     || 5/2005   || Hampton &amp;amp; Wilkinsburg (Geometry) || 60 || 270 || No, incompatible format&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;4&#039;&#039;&#039; || 2/2006     || 4/2006   || CWCTC (Geometry)          || 84 || 1,008 || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
&lt;br /&gt;
While working with a tutoring system, students are expected to regulate their own learning process. However, they often demonstrate inadequate [[metacognition|metacognitive processes]] in doing so. For example, students often ask for help too frequently or not frequently enough. &lt;br /&gt;
In this project we built an Intelligent Tutoring System to teach [[metacognition]], and in particular, to improve students&#039; [[help-seeking behavior]].  Our Help Seeking Support Environment includes three components:&lt;br /&gt;
# Direct [[help-seeking behavior|help seeking]] [[explicit instruction|instruction]], given by the teacher&lt;br /&gt;
# A [[Self-Assessment]] Tutor, to help students evaluate their own need for help&lt;br /&gt;
# The Help Tutor - a domain-independent agent that can be added as an adjunct to a [[cognitive tutor]]. Rather than making help-seeking decisions for the students, the Help Tutor teaches better help-seeking skills by tracing students&#039; actions against a (meta)cognitive [[help-seeking model]] and giving students appropriate feedback. &lt;br /&gt;
&lt;br /&gt;
In a series of [[in vivo experiment]]s, the Help Tutor accurately detected help-seeking errors that were associated with poorer learning and with poorer [[declarative]] and [[procedural]] [[knowledge component]]s of help seeking.  The main findings were that students made fewer help-seeking errors while working with the Help Tutor and acquired better help seeking [[declarative]] [[knowledge component]]s. &lt;br /&gt;
However, we did not find evidence that this led to an improvement in learning at the domain level or to better [[help-seeking behavior]] in a paper-and-pencil environment. &lt;br /&gt;
We pose a number of hypotheses in an attempt to explain these results. We question the current focus of metacognitive tutoring, and suggest ways to reexamine the role of [[help facilities]] and of metacognitive tutoring within Intelligent Tutoring Systems.&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
Teaching [[metacognition]] holds the promise not only of improving current learning in the domain of interest, but also, or even mainly, of accelerating future learning and successful regulation of independent learning. One example of metacognitive knowledge is help-seeking [[knowledge component]]s: the ability to identify the need for help, and to elicit appropriate assistance from the [[relevant resources|help facilities]].  &lt;br /&gt;
However, considerable evidence shows that metacognitive [[knowledge component]]s are in need of better support. For example, while working with Intelligent Tutoring Systems, students try to &amp;quot;[[game the system]]&amp;quot; or do not [[self-explanation|self-explain]] enough. Similarly, research shows that students&#039; [[help-seeking behavior]] leaves much room for improvement. &lt;br /&gt;
&lt;br /&gt;
==== Shallow help seeking [[knowledge component]]s ====&lt;br /&gt;
Research shows that students do not use their help-seeking knowledge components appropriately. For example, Aleven et al. (2006) show that 30% of students&#039; actions were consecutive fast help requests (a common form of [[help abuse]], termed &#039;[[clicking through hints]]&#039;), without taking enough time to read the requested hints.  &lt;br /&gt;
Extensive log-file analysis suggests that students apply faulty [[knowledge component]]s such as the following:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faulty [[procedural]] [[knowledge component]]s:&#039;&#039;&#039;&lt;br /&gt;
Cognitive aspects:&lt;br /&gt;
  If I don’t know the answer =&amp;gt; &lt;br /&gt;
  I should guess&lt;br /&gt;
&lt;br /&gt;
Motivational aspects:&lt;br /&gt;
  If I get the answer correct =&amp;gt;&lt;br /&gt;
  I achieved the goal&lt;br /&gt;
&lt;br /&gt;
Social aspects:&lt;br /&gt;
  If I ask for help =&amp;gt;&lt;br /&gt;
  I am weak&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faulty [[declarative]] [[knowledge component]]s:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
  Asking for hints will always reduce my skill level&lt;br /&gt;
&lt;br /&gt;
  Making an error is better than asking for a hint&lt;br /&gt;
&lt;br /&gt;
  Only weak people ask for help&lt;br /&gt;
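The &#039;clicking through hints&#039; pattern mentioned above can be detected from tutor log files. A minimal sketch, assuming each logged action is a (kind, seconds) pair; this representation and the 3-second threshold are assumptions for illustration, not the actual Help Tutor detector:&lt;br /&gt;

```python
# Illustrative detector for consecutive fast hint requests
# (clicking through hints). The threshold is an assumed value.
def count_fast_hint_runs(actions, min_read_seconds=3.0):
    # actions: list of (kind, seconds) pairs, kind is "hint" or "attempt"
    flagged = 0
    previous_was_hint = False
    for kind, seconds in actions:
        # A hint request shorter than the reading threshold, issued
        # right after another hint request, is flagged as clicking through.
        is_fast_hint = kind == "hint" and min_read_seconds > seconds
        if is_fast_hint and previous_was_hint:
            flagged += 1
        previous_was_hint = kind == "hint"
    return flagged
```

A real analysis would work over the tutor&#039;s actual log schema; this sketch only shows the shape of the rule.&lt;br /&gt;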
&lt;br /&gt;
==== Teaching vs. supporting [[metacognition]] ====&lt;br /&gt;
&lt;br /&gt;
Several systems support students&#039; metacognitive actions in a way that encourages, or even forces, students to learn productively and efficiently. For example, a tutoring system can require the student to self-explain. While this approach is likely to improve domain learning in the supported environment, the effect is not likely to persist beyond the scope of the tutoring system, and therefore is not likely to help students become better future learners. &lt;br /&gt;
&lt;br /&gt;
For this reason, we chose not to &#039;&#039;&#039;support&#039;&#039;&#039; students&#039; help-seeking actions, but to &#039;&#039;&#039;teach&#039;&#039;&#039; them better help-seeking skills. Rather than making the metacognitive decisions for the students (for example, by preventing help-seeking errors or gaming opportunities), this study focuses on helping students refine their help-seeking [[knowledge component]]s and acquire better [[feature validity]] of their [[help-seeking behavior|help-seeking]] [[metacognition|metacognitive skills]].&lt;br /&gt;
&lt;br /&gt;
By doing so, we examine whether metacognitive knowledge can be taught using familiar conventional domain-level pedagogies.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
See [[:Category:Help Tutor|Help Tutor Glossary]]&lt;br /&gt;
&lt;br /&gt;
=== Research questions ===&lt;br /&gt;
&lt;br /&gt;
# Can conventional and well-established instructional principles in the domain level be used to tutor [[metacognition|metacognitive]] [[knowledge component]]s such as [[help-seeking behavior|Help Seeking]] [[knowledge component]]s?&lt;br /&gt;
# Does the practice of better metacognitive behavior translate, in turn, to better domain learning?&lt;br /&gt;
&lt;br /&gt;
In addition, the project makes the following contributions:&lt;br /&gt;
# An improved understanding of the nature of help-seeking knowledge and its acquisition.&lt;br /&gt;
# A novel framework for the design of  goals, interaction and assessment for metacognitive tutoring.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Two studies were performed with the Help Tutor. In both studies the independent variable was the presence of help-seeking support.&lt;br /&gt;
The Control condition used the conventional Geometry Cognitive Tutor:&lt;br /&gt;
&lt;br /&gt;
[[Image:Geometry Cognitive Tutor.jpg]]&lt;br /&gt;
&lt;br /&gt;
The treatment condition varied between studies:&lt;br /&gt;
* Study one: The Geometry Cognitive Tutor + the Help Tutor&lt;br /&gt;
* Study two: The Geometry Cognitive Tutor + the Help Seeking Support Environment (help seeking explicit instruction, self-assessment tutor, and Help Tutor)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Help Tutor:&#039;&#039;&#039;&lt;br /&gt;
The Help Tutor is a Cognitive Tutor in its own right that identifies recommended types of actions by tracing students’ interaction with the Geometry Cognitive Tutor relative to a metacognitive help-seeking model. When students perform actions that deviate from the recommended ones, the Help Tutor presents a message that stresses the recommended action to be taken. Messages from the metacognitive Help Tutor and the domain-level Cognitive Tutor are coordinated, so that the student receives only the most helpful message at each point [2].   &lt;br /&gt;
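The one-message-at-a-time coordination described above can be sketched roughly as follows; the (priority, text) representation is an assumption made for illustration, not the published algorithm.&lt;br /&gt;

```python
# Rough sketch of one-message-at-a-time coordination between the
# domain-level Cognitive Tutor and the metacognitive Help Tutor.
# The (priority, text) pair representation is an assumption.
def coordinate_feedback(domain_message, metacognitive_message):
    # Each argument is None or a (priority, text) pair; higher priority wins.
    candidates = [m for m in (domain_message, metacognitive_message) if m is not None]
    if not candidates:
        return None
    # Deliver only the single most helpful message at this step.
    return max(candidates)[1]
```

The point of the rule is that the student never sees two competing messages at once, whichever tutor produced them.&lt;br /&gt;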
&lt;br /&gt;
[[Image:The Help-tutor.jpg]]&lt;br /&gt;
[[image:The Help Seeking Model.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Self-Assessment Tutor:&#039;&#039;&#039;&lt;br /&gt;
The ability to correctly self-assess one’s own knowledge level is correlated with strategic use of help (Tobias and Everson, 2002). The Self-Assessment Tutor is designed to tutor students on &lt;br /&gt;
their self-assessment skills; to help students make appropriate learning decisions based on their self-assessment; and mainly, to give students a tutoring environment, low on cognitive load, in which they can practice using their help-seeking skills.  &lt;br /&gt;
The curriculum used by the Treatment group in study two consists of interleaving Self Assessment and Cognitive Tutor + Help Tutor sessions, with the Self Assessment sessions taking about 10% of the students’ time. During each self-assessment session the student assesses the skills to be practiced in the subsequent Cognitive Tutor section.&lt;br /&gt;
  &lt;br /&gt;
[[Image:The Self-Assessment Tutor.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Explicit help-seeking instruction:&#039;&#039;&#039;&lt;br /&gt;
As White and Frederiksen (1998) demonstrated, reflecting in the classroom environment on the desired metacognitive process helps students internalize it. With that goal in mind, we created a short classroom lesson about help seeking with the following objectives: to give students a better declarative understanding of desired and effective help-seeking behavior; to improve their dispositions and attitudes towards seeking help; and to frame the help-seeking knowledge as an important learning goal, alongside Geometry knowledge, for the coming few weeks. The instruction includes a video presentation with examples of productive and faulty help-seeking behavior and the appropriate help-seeking principles. &lt;br /&gt;
&lt;br /&gt;
[[Image:Explicit help-seeking instruction.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Dependent variables ===&lt;br /&gt;
&lt;br /&gt;
The study uses two levels of dependent measures:&lt;br /&gt;
# Directly assessing Help Seeking skills&lt;br /&gt;
# Assessing domain-level learning, and thereby evaluating the contribution of the help-seeking skills.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Assessments of help-seeking knowledge:&lt;br /&gt;
* [[Normal post-test]]: &lt;br /&gt;
** Declarative: hypothetical help-seeking dilemmas&lt;br /&gt;
** Procedural: Help seeking error rate while working with the tutor&lt;br /&gt;
* [[Transfer]]: Ability to use optional hints embedded within certain test items in the paper test.&lt;br /&gt;
&lt;br /&gt;
[[Image:embedded hints.jpg]]&lt;br /&gt;
&lt;br /&gt;
2. Assessments of domain knowledge:&lt;br /&gt;
* [[Normal post-test]]: Problem solving and explanation items like those in the tutor&#039;s instruction.&lt;br /&gt;
* [[Transfer]]: &lt;br /&gt;
** Data insufficiency (or &amp;quot;not enough information&amp;quot;) items.&lt;br /&gt;
** Conceptual understanding items (study two only)&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
&lt;br /&gt;
The combination of explicit help-seeking instruction, on-time feedback on help-seeking errors, and raising awareness of knowledge deficits will&lt;br /&gt;
* Improve [[feature validity]] of students&#039; help seeking skills&lt;br /&gt;
and thus, in turn, will&lt;br /&gt;
* Improve learning of domain knowledge by using those skills effectively.&lt;br /&gt;
&lt;br /&gt;
==== Instructional Principles ====&lt;br /&gt;
&lt;br /&gt;
The main principle being evaluated here is whether [[Roll_help seeking principle | instruction should support meta-cognition in the context of problem solving]] by using principles of cognitive tutoring such as:&lt;br /&gt;
* Giving direct instruction&lt;br /&gt;
* Giving immediate feedback on errors&lt;br /&gt;
* Prompting for self-assessment&lt;br /&gt;
&lt;br /&gt;
This utilizes the following instructional principles:&lt;br /&gt;
&lt;br /&gt;
* The Self-Assessment Tutor utilizes the [[Reflection questions]] principle&lt;br /&gt;
* The Help Tutor itself utilizes the [[Tutoring feedback]] principle&lt;br /&gt;
* The Help Seeking Instruction utilizes the [[Explicit instruction]] principle.&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
As seen below (adapted from Roll et al. 2006), metacognitive tutoring has the following goals:&lt;br /&gt;
# First, the tutoring system should capture metacognitive errors (in our case, help-seeking errors).&lt;br /&gt;
# Then, it should lead to an improved metacognitive behavior within the tutoring system.&lt;br /&gt;
# This, in turn, should lead to an improvement in the domain learning.&lt;br /&gt;
# The effect should persist beyond the scope of the tutoring system.&lt;br /&gt;
# As a result, students are expected to become better future learners.&lt;br /&gt;
&lt;br /&gt;
[[Image:Roll_Pyramid.jpg]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 1: Capture metacognitive errors ====&lt;br /&gt;
&lt;br /&gt;
In study 1, 17% of students&#039; actions were classified as errors. These errors were significantly negatively correlated with learning (r = -0.42): the more help-seeking errors captured by the system, the smaller the improvement from pre- to post-test.&lt;br /&gt;
This data suggests that the help-seeking model captures the appropriate actions, and that the goal was achieved: the Help Tutor captures help-seeking errors.&lt;br /&gt;
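The shape of this correlational analysis can be sketched as follows; the per-student numbers below are invented for illustration and are not the study data.&lt;br /&gt;

```python
# Illustrative sketch: correlate per-student help-seeking error rate
# with pre-to-post learning gain. All data points are invented.
from math import sqrt

def pearson_r(xs, ys):
    # Standard Pearson correlation coefficient.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

error_rates = [0.05, 0.10, 0.20, 0.30]      # invented per-student rates
learning_gains = [0.40, 0.35, 0.20, 0.10]   # invented pre-to-post gains
r = pearson_r(error_rates, learning_gains)  # strongly negative
```

A negative r of this kind is what the reported r = -0.42 summarizes: students with more flagged help-seeking errors tended to gain less.&lt;br /&gt;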
&lt;br /&gt;
[[Image:help-seeking_and_learning.jpg]]&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 2: Improve metacognitive behavior ====&lt;br /&gt;
&lt;br /&gt;
Students’ hint usage improved significantly while working with the Help Tutor across several measures, most notably, help-seeking error rate, frequency of asking for bottom-out hints, and hint reading time. Study 2 also shows that these aspects of improvement persist even beyond the scope of the Help Tutor. These improvements consistently reached significance only after an extended period of time of working with the Help Tutor (i.e., after the second part of the study). We hypothesize that since the Help Tutor feedback appeared in two different areas of geometry learning, students could more easily acquire the domain-independent help-seeking skills and thus transfer them better to the other subject matter areas addressed between and after the two in which the Help Tutor was used. &lt;br /&gt;
The addition of the help-seeking instruction and self-assessment episodes in Study 2 led to improvements in students’ conceptual help-seeking knowledge, time spent reading hints, and an apparent improvement in hint-to-error ratio. The fact that students were more likely to ask for hints than to commit more errors provides behavioral evidence that the self-assessment instruction, delivered both by the Self-Assessment Tutor and by certain messages in the Help Tutor, leads students to be more aware of their need for help, reinforcing the causal link between self-assessment and strategic help seeking (Tobias &amp;amp; Everson, 2002). &lt;br /&gt;
Although students’ hint usage improved, no major improvements in the deliberateness of students’ solution attempts were found (besides that indicated by the differences in hint-to-error ratio). It may be that the Help Tutor did not support this aspect of learning well enough. An alternative explanation is that as long as students attempted to solve problems (whether successfully or not), they were too occupied by the problem-solving attempts and thus did not devote attention (or cognitive resources) to the Help Tutor until they reached an impasse that required them to ask for help. This may be an outcome of our design decision to give feedback in the context of domain learning. &lt;br /&gt;
&lt;br /&gt;
[[Image:help-seeking_behavior.jpg]]&lt;br /&gt;
&lt;br /&gt;
This data suggests that while the system was not able to improve all aspects of the desired metacognitive behavior, it did improve students&#039; behavior on the common types of errors.&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 3: Improve domain learning ====&lt;br /&gt;
&lt;br /&gt;
While students&#039; help-seeking behavior improved while working with the Help Tutor (in study 1) or the full Help Seeking Support Environment (in study 2), we did not observe differences in learning between the two conditions in either study 1 or study 2.&lt;br /&gt;
&lt;br /&gt;
Study 1 results:&lt;br /&gt;
&lt;br /&gt;
[[Image:study_1_results.jpg]]&lt;br /&gt;
&lt;br /&gt;
Study 2 results:&lt;br /&gt;
&lt;br /&gt;
[[Image:study_2_results.jpg]]&lt;br /&gt;
&lt;br /&gt;
* Note: since the tests at times 1 and 2 evaluated different instructional units (Angles vs. Quadrilaterals), lower grades at time 2 do not indicate a decrease in knowledge.&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 4: Improve future metacognitive behavior ====&lt;br /&gt;
&lt;br /&gt;
To evaluate whether the effect of the help-seeking curriculum persists beyond the tutored environment, students&#039; help seeking behavior was evaluated in a transfer environment - the paper and pencil tests.&lt;br /&gt;
&lt;br /&gt;
Hypothetical help seeking dilemmas, such as the one described below, were used to evaluate declarative help-seeking knowledge.&lt;br /&gt;
&lt;br /&gt;
  1. You tried to answer a question that you know, but for some reason the tutor says that your answer is wrong. What should you do? &lt;br /&gt;
  [ ] First I would review my calculations. Perhaps I can find the mistake myself? &lt;br /&gt;
  [ ] The Tutor must have made a mistake. I will retype the same answer again. &lt;br /&gt;
  [ ] I would ask for a hint, to understand my mistake.&lt;br /&gt;
&lt;br /&gt;
Procedural help-seeking skills were evaluated using embedded hints in the tests (see figure in the Dependent Measures section above).&lt;br /&gt;
&lt;br /&gt;
In study 1 (which included only the Help Tutor component), students in the Treatment condition demonstrated neither better declarative nor better procedural help-seeking knowledge compared with the Control condition.&lt;br /&gt;
&lt;br /&gt;
In study 2 (which included the explicit help-seeking instruction and the Self-Assessment Tutor in addition to the Help Tutor), students in the Treatment condition demonstrated better declarative help-seeking knowledge than Control group students, but no better procedural knowledge.&lt;br /&gt;
&lt;br /&gt;
[[Image:Declarative_knowledge.jpg]]&lt;br /&gt;
[[Image:Procedural_knowledge.jpg]]&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 5: Improve future domain learning ====&lt;br /&gt;
&lt;br /&gt;
Due to technical difficulties, this goal was not evaluated in either study.&lt;br /&gt;
&lt;br /&gt;
==== Summary of results ====&lt;br /&gt;
&lt;br /&gt;
Overall, the following pattern of results emerges from the studies:&lt;br /&gt;
- The Help Seeking Support Environment captured inappropriate help-seeking actions during the learning process&lt;br /&gt;
- Students improved their help-seeking behavior while working with the system&lt;br /&gt;
- Students acquired better declarative help-seeking knowledge after working with the system&lt;br /&gt;
- However, students&#039; domain learning did not improve&lt;br /&gt;
- Also, the improvement in students&#039; help-seeking behavior did not persist beyond the tutoring system.&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
These somewhat disappointing results raise an important question: Why did the environment not lead to an improvement in learning and in help-seeking behavior in the paper-test measures? Why did the improved online help-seeking behavior not lead to improved learning gains?&lt;br /&gt;
&lt;br /&gt;
==== Hypothesis 1: Students do not have the skills, but we didn&#039;t teach them right. ==== &lt;br /&gt;
One possible explanation may be that the Help Seeking Support Environment imposes excessive cognitive load during problem solving. Clearly, the learning process with the Help Seeking Support Environment is more demanding than with the conventional Cognitive Tutor alone, since more needs to be learned. However, much of the extra content is introduced during the classroom discussion and self-assessment sessions. The only extra content presented during the problem-solving sessions is the Help Tutor’s error messages, but these are not expected to increase the load much, especially given that a prioritization algorithm makes sure students receive only one message at a time (either from the Help Tutor or the [[cognitive tutor]]). Also, if the Help Seeking Support Environment indeed requires too much cognitive load, it should be expected to hinder learning, which we did not observe.&lt;br /&gt;
&lt;br /&gt;
==== Hypothesis 2: The role of help seeking in ITS ==== &lt;br /&gt;
Hints in tutoring systems have two objectives: to promote learning of challenging skills, and to help students move forward within the curriculum (i.e., to prevent them from getting stuck). While the latter is achieved easily by both the Cognitive Tutor and the Help Seeking Support Environment, achieving the former is much harder. It is not yet clear what makes a hint a good hint, or how to create an effective [[hint sequence]]. It is possible that the hints, as implemented in the units of the Cognitive Tutor we used, are not optimal. For example, there may be too many levels of hints, with each level adding too little information to the previous one. Also, perhaps the detailed explanations are too demanding with regard to students’ reading comprehension ability. It is quite possible that these hints, regardless of how they are used, are above students&#039; [[zone of proximal development]], and thus do not contribute much to learning. Support for that idea comes from Schworm and Renkl (2002), who found that explanations offered by the system impaired learning when [[self-explanation]] was required. The Geometry Cognitive Tutor prompts for [[self-explanation]] in certain units. Perhaps elaborated hints are redundant, or even damaging, when [[self-explanation]] is required. &lt;br /&gt;
It is also possible that [[help-seeking behavior]] we currently view as faulty may actually be useful and desirable, in specific contexts for specific students. For example, perhaps a student who does not know the material should be allowed to view the bottom-out hint immediately, in order to turn the problem into a solved example. Support for that idea can be found in work by Yudelson et al. (2006), in which medical students in a leading medical school successfully learned by repeatedly asking for more elaborated hints. Such “clicking-through hints” behavior would be considered faulty by the Help Seeking Support Environment. However, this population of students is known to have good metacognitive skills (without them it is unlikely they would have reached their current position). Thus, it seems that sometimes “misusing” help (e.g., [[help abuse]], according to the Help Seeking Support Environment) can be beneficial to some students. Further evidence can be found in Baker et al. (2004), who showed that some (but not all) students who “[[game the system]]” (i.e., click through hints or guess repeatedly) learn just as much as students who do not game. It may be the case that certain gaming behaviors are adaptive, and not irrational. Students who use these strategies will insist on viewing the [[bottom out hint]] and will ignore all intermediate hints, whether domain-level or metacognitive. Once intermediate hints are ignored, better [[help-seeking behavior]] according to the Help Seeking Support Environment should have no effect whatsoever on domain [[knowledge component]]s, as indeed was seen.&lt;br /&gt;
It is possible that we are overestimating students’ ability to learn from hints. Our first recommendation is to re-evaluate the role of hints in [[cognitive tutor]]s using complementary methodologies such as log-file analysis (e.g., Chang (2006) uses dynamic Bayes nets to evaluate the contribution of hints in a reading tutor); tracing individual students (to evaluate the different uses students make of hints); experimentation with different types of hints (for example, proactive vs. on-demand); and analysis of human tutors who aid students working with [[cognitive tutor]]s.&lt;br /&gt;
&lt;br /&gt;
==== Hypothesis 3: The focus of metacognitive tutoring in ITS. ====&lt;br /&gt;
The previous hypothesis, focused on students’ tendency to skip hints, suggests that perhaps the main issue is not lack of knowledge, but lack of motivation. In other words, &#039;&#039;&#039;perhaps students already have these skills in place, but choose not to use them.&#039;&#039;&#039; For students who ignore intermediate hints, metacognitive messages offer little incentive. While the Help Seeking Support Environment can increase the probability that a proper hint level appears on the screen, it has no influence on whether that hint is read, or whether the student attempts to understand it. Students may ignore the messages for several reasons. For example, they may habitually click through hints, and may resent the changes that the Help Seeking Support Environment imposes. This idea is consistent with the teachers’ observation that the students were not fond of the Help Seeking Support Environment error messages. They may comply with them, in order to make progress, but beyond that will ignore their content. The test data discussed above provides support for this idea: on 7 out of the 12 hint evaluations (seen in the findings section of goal 4), students scored lower on items with hints than on items without hints. A [[cognitive headroom]] explanation does not account for this difference, since the Request hints did not add much load. A more likely explanation is that students chose to skip the hints since they were new to them in the given context. Baker (2005) reviewed several reasons why students [[game the system]]. While no clear answer was given, the question is applicable here as well. &lt;br /&gt;
Motivational issues bring us to our final hypothesis. Time preference discount (Feldstein, 1964) is a term from economics describing behavior in which people prefer a smaller immediate reward over a greater but delayed reward. In the tutoring environment, comparing the benefit of an immediate correct answer with the delayed benefit (if any) of acting in a metacognitively sound way may often lead the student to choose the former. If that is indeed the case, then students may already have the right metacognitive skills in place. The question we should be asking ourselves is not only how to get students to learn the desired metacognitive skills, but mainly how to get them to use those skills.&lt;br /&gt;
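A toy discounting model makes the time-preference argument concrete; the utilities and the discount factor below are illustrative assumptions, not measured quantities.

```python
# Toy model of time-preference discounting (Feldstein, 1964) applied to the
# help-seeking choice described above. The utilities and the discount
# factor are illustrative assumptions, not measured quantities.

def discounted_value(reward, delay_steps, discount=0.7):
    """Present value of a reward received delay_steps into the future."""
    return reward * discount ** delay_steps

# Guessing can yield an immediate correct answer; reading a hint carefully
# pays off only later, as durable learning.
guess_now = discounted_value(reward=1.0, delay_steps=0)
learn_later = discounted_value(reward=3.0, delay_steps=5)

# The delayed reward is larger in absolute terms, but a steep discount
# makes the immediate reward look better at decision time.
print(guess_now > learn_later)  # True: 1.0 vs about 0.5
```

Under this reading, "gaming" is a rational response to a steep discount, which is why teaching the skill alone may not change the behavior.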
&lt;br /&gt;
=== Connections===&lt;br /&gt;
&lt;br /&gt;
* The Help Tutor attempts to extend traditional tutoring beyond the common domains. In this respect, it is similar to the work of Amy Ogan on tutoring [[FrenchCulture | French Culture]].&lt;br /&gt;
&lt;br /&gt;
* The interaction between the student and the tutor, which is &amp;quot;natural&amp;quot; in the control condition, is guided by the Help Tutor in the treatment condition. This is similar to the scripting manipulation of the [[Rummel Scripted Collaborative Problem Solving]] and the [[Walker A Peer Tutoring Addition]] projects.&lt;br /&gt;
&lt;br /&gt;
* Another example for studying the effects of hints is [[Ringenberg Examples-as-Help|Ringenberg&#039;s study]], in which hints are compared to examples. &lt;br /&gt;
&lt;br /&gt;
* Going to do an in-vivo study at a LearnLab site? Check out how to answer [[FAQ for teachers|teacher&#039;s FAQ]]&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
Plans for June 2007 - Dec. 2007:&lt;br /&gt;
* Present the study at the International Conference on Artificial Intelligence in Education&lt;br /&gt;
* Submit the camera-ready copy of the paper to the journal Metacognition and Learning&lt;br /&gt;
* Analyze the log files for metacognitive learning&lt;br /&gt;
&lt;br /&gt;
=== Annotated bibliography ===&lt;br /&gt;
&lt;br /&gt;
# Aleven, V., &amp;amp; Koedinger, K.R. (2000) Limitations of student control: Do students know when they need help? in proceedings of 5th International Conference on Intelligent Tutoring Systems, 292-303. Berlin: Springer Verlag. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., McLaren, B.M., Roll, I., &amp;amp; Koedinger, K.R. (2004) Toward tutoring help seeking - Applying cognitive modeling to meta-cognitive skills . in proceedings of 7th International Conference on Intelligent Tutoring Systems, 227-39. Berlin: Springer-Verlag. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., Roll, I., McLaren, B.M., Ryu, E.J., &amp;amp; Koedinger, K.R. (2005) An architecture to combine meta-cognitive and cognitive tutoring: Pilot testing the Help Tutor. in proceedings of 12th International Conference on Artificial Intelligence in Education, Amsterdam, The Netherlands: IOS press. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., McLaren, B.M., Roll, I., &amp;amp; Koedinger, K.R. (2006). Toward meta-cognitive tutoring: A model of help seeking with a Cognitive Tutor. Int Journal of Artificial Intelligence in Education(16), 101-30 [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Baker, R.S., Corbett, A.T., &amp;amp; Koedinger, K.R. (2004) Detecting Student Misuse of Intelligent Tutoring Systems. in proceedings of 7Th International Conference on Intelligent Tutoring Systems, 531-40.&lt;br /&gt;
# Baker, R.S., Roll, I., Corbett, A.T., &amp;amp; Koedinger, K.R. (2005) Do Performance Goals Lead Students to Game the System? in proceedings of 12Th International Conference on Artificial Intelligence in Education, 57-64. Amsterdam, The Netherlands: IOS Press.&lt;br /&gt;
# Chang, K.K., Beck, J.E., Mostow, J., &amp;amp; Corbett, A. (2006) Does Help Help? A Bayes Net Approach to Modeling Tutor Interventions. in proceedings of Workshop on Educational Data Mining at AAAI 2006, 41-6. Menlo Park, California: AAAI.&lt;br /&gt;
# Feldstein, M.S. (1964). The Social Time Preference Discount Rate in Cost-Benefit Analysis. The Economic Journal 74(294), 360-79&lt;br /&gt;
# Roll, I., Baker, R.S., Aleven, V., McLaren, B.M., &amp;amp; Koedinger, K.R. (2005) Modeling Students’ Metacognitive Errors in Two Intelligent Tutoring Systems. in L. Ardissono,  (Eds.), in proceedings of User Modeling 2005, 379-88. Berlin: Springer-Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Ryu, E., Sewall, J., Leber, B., McLaren, B.M., Aleven, V., &amp;amp; Koedinger, K.R. (2006) Towards Teaching Metacognition: Supporting Spontaneous Self-Assessment. in proceedings of 8th International Conference on Intelligent Tutoring Systems, 738-40. Berlin: Springer Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B.M., Ryu, E., Baker, R.S., &amp;amp; Koedinger, K.R. (2006) The Help Tutor: Does Metacognitive Feedback Improve Students&#039; Help-Seeking Actions, Skills and Learning? in proceedings of 8th Int C on Intelligent Tutoring Systems, 360-9. Berlin: Springer Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B. M., &amp;amp; Koedinger, K. R. (2007). Can help seeking be tutored? Searching for the secret sauce of metacognitive tutoring. International Conference on Artificial Intelligence in Education, , 203-10. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B. M., &amp;amp; Koedinger, K. R. (2007). Designing for metacognition - applying cognitive tutor principles to the tutoring of help seeking. Metacognition and Learning, 2(2). [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Schworm, S., &amp;amp; Renkl, A. (2002) Learning by solved example problems: Instructional explanations reduce self-explanation activity. in proceedings of The 24Th Annual Conference of the Cognitive Science Society, 816-21. Mahwah, NJ: Erlbaum.&lt;br /&gt;
# Yudelson, M.V., Medvedeva, O., Legowski, E., Castine, M., Jukic, D., &amp;amp; Crowley, R.S. (2006) Mining Student Learning Data to Develop High Level Pedagogic Strategy in a Medical ITS. in proceedings of Workshop on Educational Data Mining at AAAI 2006, Menlo Park, CA: AAAI.&lt;br /&gt;
# Roll, I., Aleven, V., &amp;amp; Koedinger, K.R. (2004) Promoting Effective Help-Seeking Behavior through Declarative Instruction. in proceedings of 7th International Conference on Intelligent Tutoring Systems, 857-9. Berlin: Springer-Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Empirical Study]]&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=The_Help_Tutor_Roll_Aleven_McLaren&amp;diff=9498</id>
		<title>The Help Tutor Roll Aleven McLaren</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=The_Help_Tutor_Roll_Aleven_McLaren&amp;diff=9498"/>
		<updated>2009-05-21T04:16:23Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Towards Tutoring [[Metacognition]] - The Case of Help Seeking ==&lt;br /&gt;
Ido Roll, Vincent Aleven, Bruce M. McLaren, Kenneth Koedinger&lt;br /&gt;
&lt;br /&gt;
=== Meta-data ===&lt;br /&gt;
&lt;br /&gt;
PI&#039;s: Vincent Aleven, Ido Roll, Bruce M. McLaren, Ken Koedinger&lt;br /&gt;
&lt;br /&gt;
Other Contributers: EJ Ryu (programmer)&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Study # !! Start Date !! End Date !! LearnLab Site !!  # of Students !! Total Participant Hours !! DataShop? &lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;1&#039;&#039;&#039; || 2004       || 2004     || Analysis of existing data || 40 || 280 || No, old data&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;2&#039;&#039;&#039; || 2005       || 2005     || Analysis of existing data || 70 || 105 || No, old data&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;3&#039;&#039;&#039; || 5/2005     || 5/2005   || Hampton &amp;amp; Wilkinsburg (Geometry) || 60 || 270 || No, incompatible format&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;4&#039;&#039;&#039; || 2/2006     || 4/2006   || CWCTC (Geometry)          || 84 || 1,008 || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
&lt;br /&gt;
While working with a tutoring system, students are expected to regulate their own learning process. However, they often demonstrate inadequate [[metacognition|metacognitive processes]] in doing so. For example, students often ask for help too frequently or not frequently enough. &lt;br /&gt;
In this project we built an Intelligent Tutoring System to teach [[metacognition]], and in particular, to improve students&#039; [[help-seeking behavior]].  Our Help Seeking Support Environment includes three components:&lt;br /&gt;
# Direct [[help-seeking behavior|help seeking]] [[explicit instruction|instruction]], given by the teacher&lt;br /&gt;
# A [[Self-Assessment]] Tutor, to help students evaluate their own need for help&lt;br /&gt;
# The Help Tutor - a domain-independent agent that can be added as an adjunct to a [[cognitive tutor]]. Rather than making help-seeking decisions for the students, the Help Tutor teaches better help-seeking skills by tracing students&#039; actions against a (meta)cognitive [[help-seeking model]] and giving students appropriate feedback. &lt;br /&gt;
&lt;br /&gt;
In a series of [[in vivo experiment]]s, the Help Tutor accurately detected help-seeking errors that were associated with poorer learning and with poorer [[declarative]] and [[procedural]] [[knowledge component]]s of help seeking.  The main findings were that students made fewer help-seeking errors while working with the Help Tutor and acquired better help seeking [[declarative]] [[knowledge component]]s. &lt;br /&gt;
However, we did not find evidence that this led to an improvement in learning at the domain level or to better [[help-seeking behavior]] in a paper-and-pencil environment. &lt;br /&gt;
We pose a number of hypotheses in an attempt to explain these results. We question the current focus of metacognitive tutoring, and suggest ways to reexamine the role of [[help facilities]] and of metacognitive tutoring within Intelligent Tutoring Systems.&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
Teaching [[metacognition]] holds the promise not only of improving current learning in the domain of interest, but also, and perhaps mainly, of accelerating future learning and successful regulation of independent learning. One example of metacognitive knowledge is help-seeking [[knowledge component]]s: the ability to identify the need for help and to elicit appropriate assistance from the [[relevant resources|help facilities]].  &lt;br /&gt;
However, considerable evidence shows that metacognitive [[knowledge component]]s are in need of better support. For example, while working with Intelligent Tutoring Systems, students try to &amp;quot;[[game the system]]&amp;quot; or do not [[self-explanation|self-explain]] enough. Similarly, research shows that students&#039; [[help-seeking behavior]] leaves much room for improvement. &lt;br /&gt;
&lt;br /&gt;
==== Shallow help seeking [[knowledge component]]s ====&lt;br /&gt;
Research shows that students do not use their help-seeking knowledge components appropriately. For example, Aleven et al. (2006) show that 30% of students&#039; actions were consecutive fast help requests (a common form of [[help abuse]], termed &#039;[[clicking through hints]]&#039;), made without taking enough time to read the requested hints.  &lt;br /&gt;
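A detector for this pattern can be sketched from log timestamps. The 2-second reading threshold, the (action, timestamp) log format, and the function name are illustrative assumptions, not the project's actual implementation.

```python
# Sketch of detecting "clicking through hints": consecutive hint requests
# issued too quickly for the previous hint to have been read. The 2-second
# threshold and the (action, timestamp) log format are illustrative
# assumptions, not the project's actual implementation.

def fast_hint_runs(log, min_read_secs=2.0):
    """Return indices of hint requests that follow a previous hint
    request too quickly (the previous hint was likely left unread)."""
    flagged = []
    prev_hint_time = None
    for i, (action, t) in enumerate(log):
        if action == "hint":
            if prev_hint_time is not None and min_read_secs > t - prev_hint_time:
                flagged.append(i)
            prev_hint_time = t
        else:
            prev_hint_time = None
    return flagged

# A hypothetical session: three rapid hint clicks, then a well-spaced one.
log = [("attempt", 0.0), ("hint", 5.0), ("hint", 5.8), ("hint", 6.4),
       ("attempt", 20.0), ("hint", 30.0)]
print(fast_hint_runs(log))  # [2, 3]
```
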
Extensive log-file analysis suggests that students apply faulty [[knowledge component]]s such as the following:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faulty [[procedural]] [[knowledge component]]s:&#039;&#039;&#039;&lt;br /&gt;
Cognitive aspects:&lt;br /&gt;
  If I don’t know the answer =&amp;gt; &lt;br /&gt;
  I should guess&lt;br /&gt;
&lt;br /&gt;
Motivational aspects:&lt;br /&gt;
  If I get the answer correct =&amp;gt;&lt;br /&gt;
  I achieved the goal&lt;br /&gt;
&lt;br /&gt;
Social aspects:&lt;br /&gt;
  If I ask for help =&amp;gt;&lt;br /&gt;
  I am weak&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faulty [[declarative]] [[knowledge component]]s:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
  Asking for hints will always reduce my skill level&lt;br /&gt;
&lt;br /&gt;
  Making an error is better than asking for a hint&lt;br /&gt;
&lt;br /&gt;
  Only weak people ask for help&lt;br /&gt;
&lt;br /&gt;
==== Teaching vs. supporting [[metacognition]] ====&lt;br /&gt;
&lt;br /&gt;
Several systems support students&#039; metacognitive actions in a way that encourages, or even forces, students to learn productively and efficiently. For example, a tutoring system can require the student to self-explain. While this approach is likely to improve domain learning in the supported environment, the effect is not likely to persist beyond the scope of the tutoring system, and therefore is not likely to help students become better future learners. &lt;br /&gt;
&lt;br /&gt;
Towards that end, we chose not to &#039;&#039;&#039;support&#039;&#039;&#039; students&#039; help seeking actions, but to &#039;&#039;&#039;teach&#039;&#039;&#039; them better help-seeking skills. Rather than making the metacognitive decisions for the students (for example, by preventing help-seeking errors or gaming opportunities), this study focuses on helping students refine their Help Seeking [[knowledge component]]s and acquire better [[feature validity]] of their [[help-seeking behavior|help-seeking]] [[metacognition|metacognitive skills]].&lt;br /&gt;
&lt;br /&gt;
By doing so, we examine whether metacognitive knowledge can be taught using familiar conventional domain-level pedagogies.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
See [[:Category:Help Tutor|Help Tutor Glossary]]&lt;br /&gt;
&lt;br /&gt;
=== Research questions ===&lt;br /&gt;
&lt;br /&gt;
# Can conventional and well-established instructional principles at the domain level be used to tutor [[metacognition|metacognitive]] [[knowledge component]]s such as [[help-seeking behavior|Help Seeking]] [[knowledge component]]s?&lt;br /&gt;
# Does the practice of better metacognitive behavior translate, in turn, into better domain learning?&lt;br /&gt;
&lt;br /&gt;
In addition, the project makes the following contributions:&lt;br /&gt;
# An improved understanding of the nature of help-seeking knowledge and its acquisition.&lt;br /&gt;
# A novel framework for the design of  goals, interaction and assessment for metacognitive tutoring.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Two studies were performed with the Help Tutor. In both studies the independent variable was the presence of help-seeking support.&lt;br /&gt;
The Control condition used the conventional Geometry Cognitive Tutor:&lt;br /&gt;
&lt;br /&gt;
[[Image:Geometry Cognitive Tutor.jpg]]&lt;br /&gt;
&lt;br /&gt;
The treatment condition varied between studies:&lt;br /&gt;
* Study one: The Geometry Cognitive Tutor + the Help Tutor&lt;br /&gt;
* Study two: The Geometry Cognitive Tutor + the Help Seeking Support Environment (help seeking explicit instruction, self-assessment tutor, and Help Tutor)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Help Tutor:&#039;&#039;&#039;&lt;br /&gt;
The Help Tutor is a Cognitive Tutor in its own right that identifies recommended types of actions by tracing students’ interaction with the Geometry Cognitive Tutor relative to a metacognitive help-seeking model. When students perform actions that deviate from the recommended ones, the Help Tutor presents a message that stresses the recommended action to be taken. Messages from the metacognitive Help Tutor and the domain-level Cognitive Tutor are coordinated, so that the student receives only the most helpful message at each point [2].   &lt;br /&gt;
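The message coordination described above can be sketched as a simple priority rule. The priority ordering, the message-kind names, and the message texts are illustrative assumptions, not the system's actual policy.

```python
# Sketch of coordinating domain-level and metacognitive feedback so that
# only one message reaches the student per action. The priority ordering
# and the message contents are illustrative assumptions, not the actual
# policy of the Cognitive Tutor / Help Tutor integration.

# Lower number = higher priority.
PRIORITY = {"domain_error": 0, "help_seeking_error": 1, "hint": 2}

def select_message(candidates):
    """Pick the single most helpful message among those produced by the
    domain tutor and the Help Tutor for one student action."""
    if not candidates:
        return None
    return min(candidates, key=lambda m: PRIORITY[m["kind"]])

candidates = [
    {"kind": "help_seeking_error", "text": "Slow down and read the hint you just requested."},
    {"kind": "domain_error", "text": "Check the angle sum of the triangle."},
]
print(select_message(candidates)["text"])  # prints the domain-level message
```
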
&lt;br /&gt;
[[Image:The Help-tutor.jpg]]&lt;br /&gt;
[[image:The Help Seeking Model.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Self-Assessment Tutor:&#039;&#039;&#039;&lt;br /&gt;
The ability to correctly self-assess one’s own knowledge level is correlated with strategic use of help (Tobias and Everson, 2002). The Self-Assessment Tutor is designed to tutor students on &lt;br /&gt;
their self-assessment skills; to help students make appropriate learning decisions based on their self-assessment; and mainly, to give students a tutoring environment, low in cognitive load, in which they can practice their help-seeking skills.  &lt;br /&gt;
The curriculum used by the Treatment group in study two consists of interleaving Self-Assessment and Cognitive Tutor + Help Tutor sessions, with the Self-Assessment sessions taking about 10% of the students’ time. During each self-assessment session the student assesses the skills to be practiced in the subsequent Cognitive Tutor session.&lt;br /&gt;
  &lt;br /&gt;
[[Image:The Self-Assessment Tutor.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Explicit help-seeking instruction:&#039;&#039;&#039;&lt;br /&gt;
As White and Frederiksen demonstrated (1998), reflecting in the classroom environment on the desired metacognitive process helps students internalize it. With that goal in mind, we created a short classroom lesson about help seeking with the following objectives: to give students a better declarative understanding of desired and effective help-seeking behavior; to improve their dispositions and attitudes towards seeking help; and to frame the help-seeking knowledge as an important learning goal, alongside Geometry knowledge, for the coming few weeks. The instruction includes a video presentation with examples of productive and faulty help-seeking behavior and the appropriate help-seeking principles. &lt;br /&gt;
&lt;br /&gt;
[[Image:Explicit help-seeking instruction.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Dependent variables ===&lt;br /&gt;
&lt;br /&gt;
The study uses two levels of dependent measures:&lt;br /&gt;
# Directly assessing Help Seeking skills&lt;br /&gt;
# Assessing domain-level learning, thereby evaluating the contribution of the help-seeking skills.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Assessments of help-seeking knowledge:&lt;br /&gt;
* [[Normal post-test]]: &lt;br /&gt;
** Declarative: hypothetical help-seeking dilemmas&lt;br /&gt;
** Procedural: Help seeking error rate while working with the tutor&lt;br /&gt;
* [[Transfer]]: Ability to use optional hints embedded within certain test items in the paper test.&lt;br /&gt;
&lt;br /&gt;
[[Image:embedded hints.jpg]]&lt;br /&gt;
&lt;br /&gt;
2. Assessments of domain knowledge:&lt;br /&gt;
* [[Normal post-test]]: Problem solving and explanation items like those in the tutor&#039;s instruction.&lt;br /&gt;
* [[Transfer]]: &lt;br /&gt;
** Data insufficiency (or &amp;quot;not enough information&amp;quot;) items.&lt;br /&gt;
** Conceptual understanding items (study two only)&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
&lt;br /&gt;
The combination of explicit help-seeking instruction, on-time feedback on help-seeking errors, and raised awareness of knowledge deficits will&lt;br /&gt;
* Improve [[feature validity]] of students&#039; help seeking skills&lt;br /&gt;
and thus, in turn, will&lt;br /&gt;
* Improve learning of domain knowledge by using those skills effectively.&lt;br /&gt;
&lt;br /&gt;
==== Instructional Principles ====&lt;br /&gt;
&lt;br /&gt;
The main principle being evaluated here is whether [[Roll_help seeking principle | instruction should support meta-cognition in the context of problem solving]] by using principles of cognitive tutoring such as:&lt;br /&gt;
* Giving direct instruction&lt;br /&gt;
* Giving immediate feedback on errors&lt;br /&gt;
* Prompting for self-assessment&lt;br /&gt;
&lt;br /&gt;
This utilizes the following instructional principles:&lt;br /&gt;
&lt;br /&gt;
* The Self-Assessment Tutor utilizes the [[Reflection questions]] principle&lt;br /&gt;
* The Help Tutor itself utilizes the [[Tutoring feedback]] principle&lt;br /&gt;
* The Help Seeking Instruction utilizes the [[Explicit instruction]] principle.&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
As seen below (adapted from Roll et al. 2006), metacognitive tutoring has the following goals:&lt;br /&gt;
# First, the tutoring system should capture metacognitive errors (in our case, help-seeking errors).&lt;br /&gt;
# Then, it should lead to an improved metacognitive behavior within the tutoring system.&lt;br /&gt;
# This, in turn, should lead to an improvement in the domain learning.&lt;br /&gt;
# The effect should persist beyond the scope of the tutoring system.&lt;br /&gt;
# As a result, students are expected to become better future learners.&lt;br /&gt;
&lt;br /&gt;
[[Image:Roll_Pyramid.jpg]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 1: Capture metacognitive errors ====&lt;br /&gt;
&lt;br /&gt;
In study 1, 17% of students&#039; actions were classified as errors. These errors were significantly negatively correlated with learning (r = -0.42): the more help-seeking errors captured by the system, the smaller the improvement from pre- to post-test.&lt;br /&gt;
These data suggest that the help-seeking model captures meaningful actions, and that the goal was achieved: the Help Tutor captures help-seeking errors.&lt;br /&gt;
&lt;br /&gt;
[[Image:help-seeking_and_learning.jpg]]&lt;br /&gt;
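The reported relationship can be illustrated with a short computation. This is a hypothetical sketch: the per-student numbers below are invented for illustration, and only the r = -0.42 figure comes from the study.&lt;br /&gt;

```python
# Hypothetical sketch of the correlation between help-seeking error rate
# and learning gain. The sample values are made up for illustration.
from statistics import mean, stdev

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

# error_rate: fraction of a student's actions flagged as help-seeking errors
# gain: post-test score minus pre-test score
error_rate = [0.05, 0.10, 0.15, 0.20, 0.30]
gain = [0.40, 0.35, 0.20, 0.25, 0.05]
r = pearson_r(error_rate, gain)  # negative: more errors, smaller gain
```

With real data, each pair would be one student&#039;s help-seeking error rate and pre-to-post gain.&lt;br /&gt;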
&lt;br /&gt;
==== Evaluation of goal 2: Improve metacognitive behavior ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:help-seeking_behavior.jpg]]&lt;br /&gt;
&lt;br /&gt;
This data suggests that while the system was not able to improve all aspects of the desired metacognitive behavior, it did improve students&#039; behavior on the common types of errors.&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 3: Improve domain learning ====&lt;br /&gt;
&lt;br /&gt;
While students&#039; help-seeking behavior improved while working with the Help Tutor (in study 1) or the full Help Seeking Support Environment (in study 2), we did not observe differences in learning between the two conditions in either study 1 or study 2.&lt;br /&gt;
&lt;br /&gt;
Study 1 results:&lt;br /&gt;
&lt;br /&gt;
[[Image:study_1_results.jpg]]&lt;br /&gt;
&lt;br /&gt;
Study 2 results:&lt;br /&gt;
&lt;br /&gt;
[[Image:study_2_results.jpg]]&lt;br /&gt;
&lt;br /&gt;
* Note: since the tests at times 1 and 2 evaluated different instructional units (Angles vs. Quadrilaterals), lower grades at time 2 do not indicate a decrease in knowledge.&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 4: Improve future metacognitive behavior ====&lt;br /&gt;
&lt;br /&gt;
To evaluate whether the effect of the help-seeking curriculum persists beyond the tutored environment, students&#039; help seeking behavior was evaluated in a transfer environment - the paper and pencil tests.&lt;br /&gt;
&lt;br /&gt;
Hypothetical help seeking dilemmas, such as the one described below, were used to evaluate declarative help-seeking knowledge.&lt;br /&gt;
&lt;br /&gt;
  1. You tried to answer a question that you know, but for some reason the tutor says that your answer is wrong. What should you do? &lt;br /&gt;
  [ ] First I would review my calculations. Perhaps I can find the mistake myself? &lt;br /&gt;
  [ ] The Tutor must have made a mistake. I will retype the same answer again. &lt;br /&gt;
  [ ] I would ask for a hint, to understand my mistake.&lt;br /&gt;
&lt;br /&gt;
Procedural help-seeking skills were evaluated using embedded hints in the tests (see figure in the Dependent Measures section above).&lt;br /&gt;
&lt;br /&gt;
In study 1 (which included only the Help Tutor component), students in the Treatment condition demonstrated neither better declarative nor better procedural help-seeking knowledge, compared with the Control condition.&lt;br /&gt;
&lt;br /&gt;
In study 2 (which included the explicit help-seeking instruction and the Self-Assessment Tutor in addition to the Help Tutor), students in the Treatment condition demonstrated better declarative help-seeking knowledge (compared with Control group students) but no better procedural knowledge.&lt;br /&gt;
&lt;br /&gt;
[[Image:Declarative_knowledge.jpg]]&lt;br /&gt;
[[Image:Procedural_knowledge.jpg]]&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 5: Improve future domain learning ====&lt;br /&gt;
&lt;br /&gt;
Due to technical difficulties, this goal was not evaluated in either study.&lt;br /&gt;
&lt;br /&gt;
==== Summary of results ====&lt;br /&gt;
&lt;br /&gt;
Overall, the following pattern of results emerges from the studies:&lt;br /&gt;
- The Help Seeking Support Environment captured inappropriate help-seeking actions during the learning process&lt;br /&gt;
- Students improved their help-seeking behavior while working with the system&lt;br /&gt;
- Students acquired better declarative help-seeking knowledge following the intervention&lt;br /&gt;
- However, students&#039; domain learning did not improve&lt;br /&gt;
- Also, the improvement in students&#039; help seeking behavior did not persist beyond the tutoring system.&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
These somewhat disappointing results raise an important question: Why did the environment not lead to an improvement in learning and in help-seeking behavior in the paper-test measures? Why did the improved online help-seeking behavior not lead to improved learning gains?&lt;br /&gt;
&lt;br /&gt;
==== Hypothesis 1: Students do not have the skills, but we didn&#039;t teach them right. ==== &lt;br /&gt;
One possible explanation may be that the Help Seeking Support Environment imposes excessive cognitive load during problem solving. Clearly, the learning process with the Help Seeking Support Environment is more demanding than with the conventional Cognitive Tutor alone, since more needs to be learned. However, much of the extra content is introduced during the classroom discussion and self-assessment sessions. The only extra content presented during the problem-solving sessions is the Help Tutor’s error messages, which are not expected to increase the load much, especially given that a prioritization algorithm makes sure students receive only one message at a time (either from the Help Tutor or the [[cognitive tutor]]). Also, if the Help Seeking Support Environment indeed requires too much cognitive load, it should be expected to hinder learning, which we did not observe.&lt;br /&gt;
&lt;br /&gt;
==== Hypothesis 2: The role of help seeking in ITS ==== &lt;br /&gt;
Hints in tutoring systems have two objectives: to promote learning of challenging skills, and to help students move forward within the curriculum (i.e., to prevent them from getting stuck). While the latter is achieved easily with both the Cognitive Tutor and the Help Seeking Support Environment, achieving the former is much harder. It is not yet clear what makes a hint a good hint, or how to create an effective [[hint sequence]]. It is possible that the hints, as implemented in the units of the Cognitive Tutor we used, are not optimal. For example, there may be too many levels of hints, with each level adding too little information to the previous one. Also, perhaps the detailed explanations are too demanding with regard to students’ reading comprehension ability. It is quite possible that these hints, regardless of how they are used, are above students&#039; [[zone of proximal development]], and thus do not contribute much to learning. Support for that idea comes from Schworm and Renkl (2002), who found that explanations offered by the system impaired learning when [[self-explanation]] was required. The Geometry Cognitive Tutor prompts for [[self-explanation]] in certain units. Perhaps elaborated hints are redundant, or even damaging, when [[self-explanation]] is required. &lt;br /&gt;
It is also possible that [[help-seeking behavior]] that we currently view as faulty may actually be useful and desirable, in specific contexts for specific students. For example, perhaps a student who does not know the material should be allowed to view the bottom-out hint immediately, in order to turn the problem into a solved example. Support for that idea can be found in work by Yudelson et al. (2006), in which medical students in a leading medical school successfully learned by repeatedly asking for more elaborated hints. Such “clicking-through hints” behavior would be considered faulty by the Help Seeking Support Environment. However, this population of students is known to have good metacognitive skills (without them it is unlikely they would have reached their current position). Thus, it seems that sometimes “misusing” help (e.g., [[help abuse]], according to the Help Seeking Support Environment) can be beneficial to some students. Further evidence can be found in Baker et al. (2004), who showed that some (but not all) students who “[[game the system]]” (i.e., click through hints or guess repeatedly) learn just as much as students who do not game. It may be the case that certain gaming behaviors are adaptive, and not irrational. Students who use these strategies will insist on viewing the [[bottom out hint]] and will ignore all intermediate hints, whether domain-level or metacognitive. Once intermediate hints are ignored, better [[help-seeking behavior]] according to the Help Seeking Support Environment should have no effect whatsoever on domain [[knowledge component]]s, as indeed was observed.&lt;br /&gt;
It is possible that we are overestimating students’ ability to learn from hints. Our first recommendation is to re-evaluate the role of hints in [[cognitive tutor]]s using complementary methodologies, such as log-file analysis (e.g., Chang (2006) uses dynamic Bayes nets to evaluate the contribution of hints in a reading tutor); tracing individual students (to evaluate the different uses students make of hints); experimentation with different types of hints (for example, proactive vs. on demand); and analysis of human tutors who aid students while working with [[cognitive tutor]]s.&lt;br /&gt;
&lt;br /&gt;
==== Hypothesis 3: The focus of metacognitive tutoring in ITS. ====&lt;br /&gt;
The previous hypothesis, focused on students’ tendency to skip hints, suggests that perhaps the main issue is not lack of knowledge, but lack of motivation. In other words, &#039;&#039;&#039;perhaps students already have these skills in place, but choose not to use them.&#039;&#039;&#039; For students who ignore intermediate hints, metacognitive messages offer little incentive. While the Help Seeking Support Environment can increase the probability that a proper hint level appears on the screen, it has no influence on whether it is being read, or whether the student attempts to understand it. Students may ignore the messages for several reasons. For example, they may habitually click through hints, and may resent the changes that the Help Seeking Support Environment imposes. This idea is consistent with the teachers’ observation that the students were not fond of the Help Seeking Support Environment error messages. They may comply with them, in order to make progress, but beyond that will ignore their content. The test data discussed above provides support for this idea. On 7 out of the 12 hint evaluations (seen in the findings section of goal 4) students scored lower on items with hints than on items with no hints. A [[cognitive headroom]] explanation does not account for this difference, since the Request hints did not add much load. A more likely explanation is that students chose to skip the hints since they were new to them in the given context. Baker (2005) reviewed several reasons for why students [[game the system]]. While no clear answer was given, the question is applicable here as well. &lt;br /&gt;
Motivational issues bring us to our final hypothesis. Time preference discounting (Feldstein, 1964) is a term coined in economics that describes behavior in which people prefer a smaller immediate reward over a greater but more distant reward. In the tutoring environment, comparing the benefit of an immediate correct answer with the delayed benefit (if any) of acting metacognitively correctly may often lead the student to choose the former. If that is indeed the case, then students may already have the right metacognitive skills in place. The question we should be asking ourselves is not only how to get students to learn the desired metacognitive skills – but mainly, how to get students to use them.&lt;br /&gt;
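As a toy illustration of the time-preference argument, a steep discount rate can make a small immediate payoff outweigh a larger delayed one. The reward values and discount rate below are invented for illustration; they are not taken from Feldstein (1964) or from the study.&lt;br /&gt;

```python
# Toy illustration of time-preference discounting. The reward values
# and discount rate are invented, not taken from the cited work.
def discounted_value(reward, delay, rate):
    """Exponentially discounted present value of a reward received
    after `delay` time steps, at per-step discount rate `rate`."""
    return reward / ((1.0 + rate) ** delay)

# A quick correct answer pays off now; real mastery pays off later.
immediate = discounted_value(reward=1.0, delay=0, rate=0.5)
delayed = discounted_value(reward=3.0, delay=5, rate=0.5)
# With a steep enough discount rate, the smaller immediate reward
# outweighs the objectively larger delayed one.
```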
&lt;br /&gt;
=== Connections===&lt;br /&gt;
&lt;br /&gt;
* The Help Tutor attempts to extend traditional tutoring beyond the common domains. In that, it is similar to the work of Amy Ogan on tutoring [[FrenchCulture | French Culture]].&lt;br /&gt;
&lt;br /&gt;
* The interaction between the student and the tutor, which is &amp;quot;natural&amp;quot; in the control condition, is guided by the Help Tutor in the treatment condition. This is similar to the scripting manipulation of the [[Rummel Scripted Collaborative Problem Solving]] and the [[Walker A Peer Tutoring Addition]] projects.&lt;br /&gt;
&lt;br /&gt;
* Another example for studying the effects of hints is [[Ringenberg Examples-as-Help|Ringenberg&#039;s study]], in which hints are compared to examples. &lt;br /&gt;
&lt;br /&gt;
* Going to do an in-vivo study at a LearnLab site? Check out how to answer [[FAQ for teachers|teacher&#039;s FAQ]]&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
Plans for June 2007 - Dec. 2007:&lt;br /&gt;
* Present the study at the International Conference on Artificial Intelligence in Education&lt;br /&gt;
* Submit camera ready copy of the paper to the Journal on Metacognition and Instruction&lt;br /&gt;
* Analyze the log files for metacognitive learning&lt;br /&gt;
&lt;br /&gt;
=== Annotated bibliography ===&lt;br /&gt;
&lt;br /&gt;
# Aleven, V., &amp;amp; Koedinger, K.R. (2000) Limitations of student control: Do students know when they need help? in proceedings of 5th International Conference on Intelligent Tutoring Systems, 292-303. Berlin: Springer Verlag. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., McLaren, B.M., Roll, I., &amp;amp; Koedinger, K.R. (2004) Toward tutoring help seeking - Applying cognitive modeling to meta-cognitive skills . in proceedings of 7th International Conference on Intelligent Tutoring Systems, 227-39. Berlin: Springer-Verlag. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., Roll, I., McLaren, B.M., Ryu, E.J., &amp;amp; Koedinger, K.R. (2005) An architecture to combine meta-cognitive and cognitive tutoring: Pilot testing the Help Tutor. in proceedings of 12th International Conference on Artificial Intelligence in Education, Amsterdam, The Netherlands: IOS press. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., McLaren, B.M., Roll, I., &amp;amp; Koedinger, K.R. (2006). Toward meta-cognitive tutoring: A model of help seeking with a Cognitive Tutor. Int Journal of Artificial Intelligence in Education(16), 101-30 [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Baker, R.S., Corbett, A.T., &amp;amp; Koedinger, K.R. (2004) Detecting Student Misuse of Intelligent Tutoring Systems. in proceedings of 7Th International Conference on Intelligent Tutoring Systems, 531-40.&lt;br /&gt;
# Baker, R.S., Roll, I., Corbett, A.T., &amp;amp; Koedinger, K.R. (2005) Do Performance Goals Lead Students to Game the System? in proceedings of 12Th International Conference on Artificial Intelligence in Education, 57-64. Amsterdam, The Netherlands: IOS Press.&lt;br /&gt;
# Chang, K.K., Beck, J.E., Mostow, J., &amp;amp; Corbett, A. (2006) Does Help Help? A Bayes Net Approach to Modeling Tutor Interventions. in proceedings of Workshop on Educational Data Mining at AAAI 2006, 41-6. Menlo Park, California: AAAI.&lt;br /&gt;
# Feldstein, M.S. (1964). The Social Time Preference Discount Rate in Cost-Benefit Analysis. The Economic Journal, 74(294), 360-79.&lt;br /&gt;
# Roll, I., Baker, R.S., Aleven, V., McLaren, B.M., &amp;amp; Koedinger, K.R. (2005) Modeling Students’ Metacognitive Errors in Two Intelligent Tutoring Systems. in L. Ardissono,  (Eds.), in proceedings of User Modeling 2005, 379-88. Berlin: Springer-Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Ryu, E., Sewall, J., Leber, B., McLaren, B.M., Aleven, V., &amp;amp; Koedinger, K.R. (2006) Towards Teaching Metacognition: Supporting Spontaneous Self-Assessment. in proceedings of 8th International Conference on Intelligent Tutoring Systems, 738-40. Berlin: Springer Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B.M., Ryu, E., Baker, R.S., &amp;amp; Koedinger, K.R. (2006) The Help Tutor: Does Metacognitive Feedback Improves Students&#039; Help-Seeking Actions, Skills and Learning? in proceedings of 8th Int C on Intelligent Tutoring Systems, 360-9. Berlin: Springer Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B. M., &amp;amp; Koedinger, K. R. (2007). Can help seeking be tutored? Searching for the secret sauce of metacognitive tutoring. International Conference on Artificial Intelligence in Education, , 203-10. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B. M., &amp;amp; Koedinger, K. R. (2007). Designing for metacognition - applying cognitive tutor principles to the tutoring of help seeking. Metacognition and Learning, 2(2). [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Schworm, S., &amp;amp; Renkl, A. (2002) Learning by solved example problems: Instructional explanations reduce self-explanation activity. in proceedings of The 24Th Annual Conference of the Cognitive Science Society, 816-21. Mahwah, NJ: Erlbaum.&lt;br /&gt;
# Yudelson, M.V., Medvedeva, O., Legowski, E., Castine, M., Jukic, D., &amp;amp; Crowley, R.S. (2006) Mining Student Learning Data to Develop High Level Pedagogic Strategy in a Medical ITS. in proceedings of Workshop on Educational Data Mining at AAAI 2006, Menlo Park, CA: AAAI.&lt;br /&gt;
# Roll, I., Aleven, V., &amp;amp; Koedinger, K.R. (2004) Promoting Effective Help-Seeking Behavior through Declarative Instruction. in proceedings of 7th International Conference on Intelligent Tutoring Systems, 857-9. Berlin: Springer-Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Empirical Study]]&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=The_Help_Tutor_Roll_Aleven_McLaren&amp;diff=9497</id>
		<title>The Help Tutor Roll Aleven McLaren</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=The_Help_Tutor_Roll_Aleven_McLaren&amp;diff=9497"/>
		<updated>2009-05-21T04:14:48Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Evaluation of goal 2: Improve metacognitive behavior */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Towards Tutoring [[Metacognition]] - The Case of Help Seeking ==&lt;br /&gt;
Ido Roll, Vincent Aleven, Bruce M. McLaren, Kenneth Koedinger&lt;br /&gt;
&lt;br /&gt;
=== Meta-data ===&lt;br /&gt;
&lt;br /&gt;
PIs: Vincent Aleven, Ido Roll, Bruce M. McLaren, Ken Koedinger&lt;br /&gt;
&lt;br /&gt;
Other Contributors: EJ Ryu (programmer)&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Study # !! Start Date !! End Date !! LearnLab Site !!  # of Students !! Total Participant Hours !! DataShop? &lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;1&#039;&#039;&#039; || 2004       || 2004     || Analysis of existing data || 40 || 280 || No, old data&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;2&#039;&#039;&#039; || 2005       || 2005     || Analysis of existing data || 70 || 105 || No, old data&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;3&#039;&#039;&#039; || 5/2005     || 5/2005   || Hampton &amp;amp; Wilkinsburg (Geometry) || 60 || 270 || No, incompatible format&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;4&#039;&#039;&#039; || 2/2006     || 4/2006   || CWCTC (Geometry)          || 84 || 1,008 || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
&lt;br /&gt;
While working with a tutoring system, students are expected to regulate their own learning process. However, they often demonstrate inadequate [[metacognition|metacognitive processes]] in doing so. For example, students often ask for help too frequently or not frequently enough. &lt;br /&gt;
In this project we built an Intelligent Tutoring System to teach [[metacognition]], and in particular, to improve students&#039; [[help-seeking behavior]].  Our Help Seeking Support Environment includes three components:&lt;br /&gt;
# Direct [[help-seeking behavior|help seeking]] [[explicit instruction|instruction]], given by the teacher&lt;br /&gt;
# A [[Self-Assessment]] Tutor, to help students evaluate their own need for help&lt;br /&gt;
# The Help Tutor - a domain-independent agent that can be added as an adjunct to a [[cognitive tutor]]. Rather than making help-seeking decisions for the students, the Help Tutor teaches better help-seeking skills by tracing students&#039; actions on a (meta)cognitive [[help-seeking model]] and giving students appropriate feedback. &lt;br /&gt;
&lt;br /&gt;
In a series of [[in vivo experiment]]s, the Help Tutor accurately detected help-seeking errors that were associated with poorer learning and with poorer [[declarative]] and [[procedural]] [[knowledge component]]s of help seeking.  The main findings were that students made fewer help-seeking errors while working with the Help Tutor and acquired better help seeking [[declarative]] [[knowledge component]]s. &lt;br /&gt;
However, we did not find evidence that this led to an improvement in learning at the domain level or to better [[help-seeking behavior]] in a paper-and-pencil environment. &lt;br /&gt;
We pose a number of hypotheses in an attempt to explain these results. We question the current focus of metacognitive tutoring, and suggest ways to reexamine the role of [[help facilities]] and of metacognitive tutoring within Intelligent Tutoring Systems.&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
Teaching [[metacognition]] not only holds the promise of improving current learning in the domain of interest; it can also, or even mainly, accelerate future learning and support successful regulation of independent learning. One example of metacognitive knowledge is help-seeking [[knowledge component]]s: the ability to identify the need for help, and to elicit appropriate assistance from the [[relevant resources|help facilities]].  &lt;br /&gt;
However, considerable evidence shows that metacognitive [[knowledge component]]s are in need of better support. For example, while working with Intelligent Tutoring Systems, students try to &amp;quot;[[game the system]]&amp;quot; or do not [[self-explanation|self-explain]] enough. Similarly, research shows that students&#039; [[help-seeking behavior]] leaves much room for improvement. &lt;br /&gt;
&lt;br /&gt;
==== Shallow help seeking [[knowledge component]]s ====&lt;br /&gt;
Research shows that students do not use their help-seeking knowledge components appropriately. For example, Aleven et al. (2006) show that 30% of students&#039; actions were consecutive fast help requests (a common form of [[help abuse]], termed &#039;[[clicking through hints]]&#039;), without taking enough time to read the requested hints.  &lt;br /&gt;
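As a rough sketch of how such consecutive fast help requests might be flagged in log files - the log format and the 3-second threshold are assumptions for illustration, not the study&#039;s actual detector:&lt;br /&gt;

```python
# Illustrative sketch of detecting "clicking through hints" in tutor logs.
# Assumptions (not from the study): each event is a dict with a "type"
# ("hint" or "attempt") and a timestamp "t" in seconds; a hint request
# arriving within FAST_SECONDS of the previous hint request is "fast".
FAST_SECONDS = 3.0

def clicking_through(events):
    """Return indices of hint requests that follow another hint request
    too quickly for the previous hint to have been read."""
    flagged = []
    for i in range(1, len(events)):
        prev, cur = events[i - 1], events[i]
        if (prev["type"] == "hint" and cur["type"] == "hint"
                and cur["t"] - prev["t"] < FAST_SECONDS):
            flagged.append(i)
    return flagged

log = [
    {"type": "hint", "t": 0.0},
    {"type": "hint", "t": 1.2},    # fast follow-up: flagged
    {"type": "hint", "t": 2.0},    # fast follow-up: flagged
    {"type": "attempt", "t": 30.0},
]
```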
Extensive log-file analysis suggests that students apply faulty [[knowledge component]]s such as the following:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faulty [[procedural]] [[knowledge component]]s:&#039;&#039;&#039;&lt;br /&gt;
Cognitive aspects:&lt;br /&gt;
  If I don’t know the answer =&amp;gt; &lt;br /&gt;
  I should guess&lt;br /&gt;
&lt;br /&gt;
Motivational aspects:&lt;br /&gt;
  If I get the answer correct =&amp;gt;&lt;br /&gt;
  I achieved the goal&lt;br /&gt;
&lt;br /&gt;
Social aspects:&lt;br /&gt;
  If I ask for help =&amp;gt;&lt;br /&gt;
  I am weak&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faulty [[declarative]] [[knowledge component]]s:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
  Asking for hints will always reduce my skill level&lt;br /&gt;
&lt;br /&gt;
  Making an error is better than asking for a hint&lt;br /&gt;
&lt;br /&gt;
  Only weak people ask for help&lt;br /&gt;
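For illustration only, the faulty procedural knowledge components above can be read as condition-action (production) rules. The encoding below is hypothetical and not taken from the actual help-seeking model:&lt;br /&gt;

```python
# Hypothetical encoding of the faulty procedural knowledge components
# as condition-action (production) rules. The structure is invented
# for illustration; it is not the study's actual model.
FAULTY_PROCEDURAL_KCS = [
    {"aspect": "cognitive", "if": "I don't know the answer",
     "then": "I should guess"},
    {"aspect": "motivational", "if": "I get the answer correct",
     "then": "I achieved the goal"},
    {"aspect": "social", "if": "I ask for help",
     "then": "I am weak"},
]

def matching_rules(condition):
    """Return the faulty rules whose condition matches the situation."""
    return [r for r in FAULTY_PROCEDURAL_KCS if r["if"] == condition]
```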
&lt;br /&gt;
==== Teaching vs. supporting [[metacognition]] ====&lt;br /&gt;
&lt;br /&gt;
Several systems support students&#039; metacognitive actions in a way that encourages, or even forces, students to learn productively and efficiently. For example, a tutoring system can require the student to self-explain. While this approach is likely to improve domain learning in the supported environment, the effect is not likely to persist beyond the scope of the tutoring system, and therefore is not likely to help students become better future learners. &lt;br /&gt;
&lt;br /&gt;
Towards that end, we chose not to &#039;&#039;&#039;support&#039;&#039;&#039; students&#039; help seeking actions, but to &#039;&#039;&#039;teach&#039;&#039;&#039; them better help-seeking skills. Rather than making the metacognitive decisions for the students (for example, by preventing help-seeking errors or gaming opportunities), this study focuses on helping students refine their Help Seeking [[knowledge component]]s and acquire better [[feature validity]] of their [[help-seeking behavior|help-seeking]] [[metacognition|metacognitive skills]].&lt;br /&gt;
&lt;br /&gt;
By doing so, we examine whether metacognitive knowledge can be taught using conventional, well-established domain-level pedagogies.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
See [[:Category:Help Tutor|Help Tutor Glossary]]&lt;br /&gt;
&lt;br /&gt;
=== Research questions ===&lt;br /&gt;
&lt;br /&gt;
# Can conventional and well-established instructional principles at the domain level be used to tutor [[metacognition|metacognitive]] [[knowledge component]]s such as [[help-seeking behavior|Help Seeking]] [[knowledge component]]s?&lt;br /&gt;
# Does the practice of better metacognitive behavior translate, in turn, to better domain learning?&lt;br /&gt;
&lt;br /&gt;
In addition, the project makes the following contributions:&lt;br /&gt;
# An improved understanding of the nature of help-seeking knowledge and its acquisition.&lt;br /&gt;
# A novel framework for the design of  goals, interaction and assessment for metacognitive tutoring.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Two studies were performed with the Help Tutor. In both studies the independent variable was the presence of help-seeking support.&lt;br /&gt;
The control condition used the conventional Geometry Cognitive Tutor:&lt;br /&gt;
&lt;br /&gt;
[[Image:Geometry Cognitive Tutor.jpg]]&lt;br /&gt;
&lt;br /&gt;
The treatment condition varied between studies:&lt;br /&gt;
* Study one: The Geometry Cognitive Tutor + the Help Tutor&lt;br /&gt;
* Study two: The Geometry Cognitive Tutor + the Help Seeking Support Environment (help seeking explicit instruction, self-assessment tutor, and Help Tutor)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Help Tutor:&#039;&#039;&#039;&lt;br /&gt;
The Help Tutor is a Cognitive Tutor in its own right that identifies recommended types of actions by tracing students’ interaction with the Geometry Cognitive Tutor relative to a metacognitive help-seeking model. When students perform actions that deviate from the recommended ones, the Help Tutor presents a message that stresses the recommended action to be taken. Messages from the metacognitive Help Tutor and the domain-level Cognitive Tutor are coordinated, so that the student receives only the most helpful message at each point [2].   &lt;br /&gt;
&lt;br /&gt;
[[Image:The Help-tutor.jpg]]&lt;br /&gt;
[[image:The Help Seeking Model.jpg]]&lt;br /&gt;
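A minimal sketch of the message coordination described above, assuming simplified rules and priorities - the real system&#039;s help-seeking model and prioritization algorithm are far more elaborate, and the thresholds and message texts here are invented:&lt;br /&gt;

```python
# Sketch of coordinating domain-level and metacognitive feedback so the
# student sees at most one message per action. Rules, thresholds, and
# message texts are invented; the real Help Tutor traces a much richer
# help-seeking model.
def domain_feedback(action):
    """Domain-level Cognitive Tutor: flag incorrect attempts."""
    if action["kind"] == "attempt" and not action["correct"]:
        return "That answer is incorrect."
    return None

def help_tutor_feedback(action, skill_estimate):
    """Metacognitive Help Tutor: flag help-seeking errors against a
    (highly simplified) help-seeking model."""
    if action["kind"] == "attempt" and skill_estimate < 0.3:
        return "You seem unsure of this skill - try asking for a hint."
    if action["kind"] == "hint" and skill_estimate > 0.8:
        return "You probably know this - try answering before asking for help."
    return None

def select_message(action, skill_estimate):
    """Prioritization: show only one message; here metacognitive
    feedback takes precedence over domain feedback."""
    meta = help_tutor_feedback(action, skill_estimate)
    return meta or domain_feedback(action)
```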
&lt;br /&gt;
&#039;&#039;&#039;The Self-Assessment Tutor:&#039;&#039;&#039;&lt;br /&gt;
The ability to correctly self-assess one’s own knowledge level is correlated with strategic use of help (Tobias and Everson, 2002). The Self-Assessment Tutor is designed to tutor students on &lt;br /&gt;
their self-assessment skills; to help students make appropriate learning decisions based on their self-assessment; and mainly, to give students a tutoring environment, low on cognitive load, in which they can practice using their help-seeking skills.  &lt;br /&gt;
The curriculum used by the Treatment group in study two consists of interleaving Self Assessment and Cognitive Tutor + Help Tutor sessions, with the Self Assessment sessions taking about 10% of the students’ time. During each self-assessment session the student assesses the skills to be practiced in the subsequent Cognitive Tutor section.&lt;br /&gt;
  &lt;br /&gt;
[[Image:The Self-Assessment Tutor.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Explicit help-seeking instruction:&#039;&#039;&#039;&lt;br /&gt;
As White and Frederiksen (1998) demonstrated, reflecting in the classroom environment on the desired metacognitive process helps students internalize it. With that goal in mind, we created a short classroom lesson about help seeking with the following objectives: to give students a better declarative understanding of desired and effective help-seeking behavior; to improve their dispositions and attitudes towards seeking help; and to frame the help-seeking knowledge as an important learning goal, alongside Geometry knowledge, for the coming few weeks. The instruction includes a video presentation with examples of productive and faulty help-seeking behavior and the appropriate help-seeking principles. &lt;br /&gt;
&lt;br /&gt;
[[Image:Explicit help-seeking instruction.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Dependent variables ===&lt;br /&gt;
&lt;br /&gt;
The study uses two levels of dependent measures:&lt;br /&gt;
# Directly assessing Help Seeking skills&lt;br /&gt;
# Assessing domain-level learning, thereby evaluating the contribution of the help-seeking skills.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Assessments of help-seeking knowledge:&lt;br /&gt;
* [[Normal post-test]]: &lt;br /&gt;
** Declarative: hypothetical help-seeking dilemmas&lt;br /&gt;
** Procedural: Help seeking error rate while working with the tutor&lt;br /&gt;
* [[Transfer]]: Ability to use optional hints embedded within certain test items in the paper test.&lt;br /&gt;
&lt;br /&gt;
[[Image:embedded hints.jpg]]&lt;br /&gt;
&lt;br /&gt;
2. Assessments of domain knowledge:&lt;br /&gt;
* [[Normal post-test]]: Problem solving and explanation items like those in the tutor&#039;s instruction.&lt;br /&gt;
* [[Transfer]]: &lt;br /&gt;
** Data insufficiency (or &amp;quot;not enough information&amp;quot;) items.&lt;br /&gt;
** Conceptual understanding items (study two only)&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
&lt;br /&gt;
The combination of explicit help-seeking instruction, on-time feedback on help-seeking errors, and raising awareness of knowledge deficits will&lt;br /&gt;
* Improve [[feature validity]] of students&#039; help seeking skills&lt;br /&gt;
and thus, in turn, will&lt;br /&gt;
* Improve learning of domain knowledge by using those skills effectively.&lt;br /&gt;
&lt;br /&gt;
==== Instructional Principles ====&lt;br /&gt;
&lt;br /&gt;
The main principle being evaluated here is whether [[Roll_help seeking principle | instruction should support meta-cognition in the context of problem solving]] by using principles of cognitive tutoring such as:&lt;br /&gt;
* Giving direct instruction&lt;br /&gt;
* Giving immediate feedback on errors&lt;br /&gt;
* Prompting for self-assessment&lt;br /&gt;
&lt;br /&gt;
This utilizes the following instructional principles:&lt;br /&gt;
&lt;br /&gt;
* The Self-Assessment Tutor utilizes the [[Reflection questions]] principle&lt;br /&gt;
* The Help Tutor itself utilizes the [[Tutoring feedback]] principle&lt;br /&gt;
* The Help Seeking Instruction utilizes the [[Explicit instruction]] principle.&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
As seen below (adapted from Roll et al. 2006), metacognitive tutoring has the following goals:&lt;br /&gt;
# First, the tutoring system should capture metacognitive errors (in our case, help-seeking errors).&lt;br /&gt;
# Then, it should lead to an improved metacognitive behavior within the tutoring system.&lt;br /&gt;
# This, in turn, should lead to an improvement in the domain learning.&lt;br /&gt;
# The effect should persist beyond the scope of the tutoring system.&lt;br /&gt;
# As a result, students are expected to become better future learners.&lt;br /&gt;
&lt;br /&gt;
[[Image:Roll_Pyramid.jpg]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 1: Capture metacognitive errors ====&lt;br /&gt;
&lt;br /&gt;
In study 1, 17% of students&#039; actions were classified as errors. These errors were significantly negatively correlated with learning (r=-0.42) - the more help-seeking errors captured by the system, the smaller the improvement from pre- to post-test.&lt;br /&gt;
These data suggest that the help-seeking model captures meaningful behavior, and that the goal was achieved - the Help Tutor captures help-seeking errors.&lt;br /&gt;
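The reported correlation is a standard Pearson product-moment coefficient between per-student help-seeking error rate and pre-to-post learning gain. A minimal sketch of that computation, using invented data points (the real study data are not reproduced here):&lt;br /&gt;

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented illustration: more help-seeking errors, smaller gain.
error_rate = [0.05, 0.10, 0.20, 0.30]  # fraction of actions flagged as errors
gain = [0.40, 0.35, 0.20, 0.10]        # post-test minus pre-test score
r = pearson_r(error_rate, gain)        # negative, as in the study
```

A negative r here simply mirrors the direction of the reported result; the magnitude depends entirely on the invented numbers.&lt;br /&gt;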
&lt;br /&gt;
[[Image:help-seeking_and_learning.jpg]]&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 2: Improve metacognitive behavior ====&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:four months of HT.jpg]]&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 3: Improve domain learning ====&lt;br /&gt;
&lt;br /&gt;
While students&#039; help-seeking behavior improved while working with the Help Tutor (in study 1) or the full Help Seeking Support Environment (in study 2), we did not observe differences in learning between the two conditions in either study 1 or study 2.&lt;br /&gt;
&lt;br /&gt;
Study 1 results:&lt;br /&gt;
&lt;br /&gt;
[[Image:study_1_results.jpg]]&lt;br /&gt;
&lt;br /&gt;
Study 2 results:&lt;br /&gt;
&lt;br /&gt;
[[Image:study_2_results.jpg]]&lt;br /&gt;
&lt;br /&gt;
* Note that since the tests at times 1 and 2 evaluated different instructional units (Angles vs. Quadrilaterals), lower grades at time 2 do not indicate a decrease in knowledge.&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 4: Improve future metacognitive behavior ====&lt;br /&gt;
&lt;br /&gt;
To evaluate whether the effect of the help-seeking curriculum persists beyond the tutored environment, students&#039; help seeking behavior was evaluated in a transfer environment - the paper and pencil tests.&lt;br /&gt;
&lt;br /&gt;
Hypothetical help seeking dilemmas, such as the one described below, were used to evaluate declarative help-seeking knowledge.&lt;br /&gt;
&lt;br /&gt;
  1. You tried to answer a question that you know, but for some reason the tutor says that your answer is wrong. What should you do? &lt;br /&gt;
  [ ] First I would review my calculations. Perhaps I can find the mistake myself? &lt;br /&gt;
  [ ] The Tutor must have made a mistake. I will retype the same answer again. &lt;br /&gt;
  [ ] I would ask for a hint, to understand my mistake.&lt;br /&gt;
&lt;br /&gt;
Procedural help-seeking skills were evaluated using embedded hints in the tests (see figure in the Dependent Variables section above).&lt;br /&gt;
&lt;br /&gt;
In study 1 (which included only the Help Tutor component), students in the Treatment condition demonstrated neither better declarative nor better procedural help-seeking knowledge, compared with the Control condition.&lt;br /&gt;
&lt;br /&gt;
In study 2 (which included the explicit help-seeking instruction and the Self-Assessment Tutor in addition to the Help Tutor), students in the Treatment condition demonstrated better declarative help-seeking knowledge (compared with Control group students) but no better procedural knowledge.&lt;br /&gt;
&lt;br /&gt;
[[Image:Declarative_knowledge.jpg]]&lt;br /&gt;
[[Image:Procedural_knowledge.jpg]]&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 5: Improve future domain learning ====&lt;br /&gt;
&lt;br /&gt;
Due to technical difficulties, this goal could not be evaluated in either study.&lt;br /&gt;
&lt;br /&gt;
==== Summary of results ====&lt;br /&gt;
&lt;br /&gt;
Overall, the following pattern of results emerges from the studies:&lt;br /&gt;
* The Help Seeking Support Environment captured inappropriate actions during the learning process&lt;br /&gt;
* Students improved their help-seeking behavior while working with the system&lt;br /&gt;
* Students acquired better declarative help-seeking knowledge following work with the system&lt;br /&gt;
* However, students&#039; domain learning did not improve&lt;br /&gt;
* Also, the improvement in students&#039; help-seeking behavior did not persist beyond the tutoring system.&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
These somewhat disappointing results raise an important question: Why did the environment not lead to an improvement in learning and in help-seeking behavior in the paper-test measures? Why did the improved online help-seeking behavior not lead to improved learning gains?&lt;br /&gt;
&lt;br /&gt;
==== Hypothesis 1: Students do not have the skills, but we didn&#039;t teach them right. ==== &lt;br /&gt;
One possible explanation may be that the Help Seeking Support Environment imposes excessive cognitive load during problem solving. Clearly, the learning process with the Help Seeking Support Environment is more demanding than with the conventional Cognitive Tutor alone, since more needs to be learned. However, much of the extra content is introduced during the classroom discussion and self-assessment sessions. The only extra content presented during the problem-solving sessions is the Help Tutor’s error messages, but these are not expected to increase the load much, especially given that a prioritization algorithm ensures students receive only one message at a time (either from the Help Tutor or the [[cognitive tutor]]). Also, if the Help Seeking Support Environment indeed requires too much cognitive load, it should be expected to hinder learning, which we did not observe.&lt;br /&gt;
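The one-message-at-a-time coordination mentioned above can be sketched as a simple selection rule. This is an illustrative sketch only; the message sources and the assumed priority ordering (domain-level feedback before metacognitive feedback) are assumptions, not the actual Help Tutor implementation:&lt;br /&gt;

```python
# Assumed priority ordering: domain-level feedback before metacognitive.
PRIORITY = {"domain": 0, "metacognitive": 1}

def select_message(candidates):
    """Pick the single highest-priority message from pending candidates.

    candidates: list of (source, text) pairs, where source is 'domain'
    (Cognitive Tutor) or 'metacognitive' (Help Tutor).
    """
    if not candidates:
        return None
    source, text = min(candidates, key=lambda m: PRIORITY[m[0]])
    return text
```

Under this assumed ordering, if both tutors have a pending message only the domain-level one is shown, which keeps the per-step load close to that of the conventional tutor.&lt;br /&gt;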
&lt;br /&gt;
==== Hypothesis 2: The role of help seeking in ITS ==== &lt;br /&gt;
Hints in tutoring systems have two objectives: to promote learning of challenging skills, and to help students move forward within the curriculum (i.e., to prevent them from getting stuck). While the latter is achieved easily with both the Cognitive Tutor and the Help Seeking Support Environment, achieving the former is much harder. It is not yet clear what makes a good hint, or how to create an effective [[hint sequence]]. It is possible that the hints, as implemented in the units of the Cognitive Tutor we used, are not optimal. For example, there may be too many levels of hints, with each level adding too little information to the previous one. Also, perhaps the detailed explanations are too demanding with regard to students’ reading comprehension ability. It is quite possible that these hints, regardless of how they are used, are above students&#039; [[zone of proximal development]], and thus do not contribute much to learning. Support for this idea comes from Schworm and Renkl (2002), who found that explanations offered by the system impaired learning when [[self-explanation]] was required. The Geometry Cognitive Tutor prompts for [[self-explanation]] in certain units. Perhaps elaborated hints are redundant, or even damaging, when [[self-explanation]] is required. &lt;br /&gt;
It is also possible that [[help-seeking behavior]] that we currently view as faulty may actually be useful and desirable, in specific contexts for specific students. For example, perhaps a student who does not know the material should be allowed to view the bottom-out hint immediately, in order to turn the problem into a solved example. Support for that idea can be found in work by Yudelson et al. (2006), in which medical students in a leading medical school successfully learned by repeatedly asking for more elaborated hints. Such “clicking-through hints” behavior would be considered faulty by the Help Seeking Support Environment. However, this population of students is known to have good metacognitive skills (without them it is unlikely they would have reached their current position). Thus, it seems that sometimes “misusing” help (e.g., [[help abuse]], according to the Help Seeking Support Environment) can be beneficial to some students. Further evidence can be found in Baker et al. (2004), who showed that some (but not all) students who “[[game the system]]” (i.e., click through hints or guess repeatedly) learn just as much as students who do not game. It may be the case that certain gaming behaviors are adaptive, and not irrational. Students who use these strategies will insist on viewing the [[bottom out hint]] and will ignore all intermediate hints, whether domain-level or metacognitive. Once intermediate hints are ignored, better [[help-seeking behavior]] according to the Help Seeking Support Environment should have no effect whatsoever on domain [[knowledge component]]s, as indeed was seen.&lt;br /&gt;
It is possible that we are overestimating students’ ability to learn from hints. Our first recommendation is to re-evaluate the role of hints in [[cognitive tutor]]s using complementary methodologies such as log-file analysis (e.g., Chang (2006) uses dynamic Bayes nets to evaluate the contribution of hints in a reading tutor); tracing individual students (to evaluate the different uses students make of hints); experimentation with different types of hints (for example, proactive vs. on demand); and analysis of human tutors who aid students while working with [[cognitive tutor]]s.&lt;br /&gt;
&lt;br /&gt;
==== Hypothesis 3: The focus of metacognitive tutoring in ITS. ====&lt;br /&gt;
The previous hypothesis, focused on students’ tendency to skip hints, suggests that perhaps the main issue is not lack of knowledge, but lack of motivation. In other words, &#039;&#039;&#039;perhaps students already have these skills in place, but choose not to use them.&#039;&#039;&#039; For students who ignore intermediate hints, metacognitive messages offer little incentive. While the Help Seeking Support Environment can increase the probability that a proper hint level appears on the screen, it has no influence on whether it is being read, or whether the student attempts to understand it. Students may ignore the messages for several reasons. For example, they may habitually click through hints, and may resent the changes that the Help Seeking Support Environment imposes. This idea is consistent with the teachers’ observation that the students were not fond of the Help Seeking Support Environment error messages. They may comply with them, in order to make progress, but beyond that will ignore their content. The test data discussed above provides support for this idea. On 7 out of the 12 hint evaluations (seen in the findings section of goal 4) students scored lower on items with hints than on items with no hints. A [[cognitive headroom]] explanation does not account for this difference, since the Request hints did not add much load. A more likely explanation is that students chose to skip the hints since they were new to them in the given context. Baker (2005) reviewed several reasons for why students [[game the system]]. While no clear answer was given, the question is applicable here as well. &lt;br /&gt;
Motivational issues bring us to our final hypothesis. Time preference discounting (Feldstein, 1964) is a term coined in economics that describes behavior in which people prefer a smaller immediate reward over a greater but more distant reward. In the tutoring environment, comparing the benefit of an immediate correct answer with the delayed benefit (if any) of acting in a metacognitively correct way may often lead the student to choose the former. If that is indeed the case, then students may already have the right metacognitive skills in place. The question we should be asking ourselves is not only how to get students to learn the desired metacognitive skills, but mainly how to get them to use those skills.&lt;br /&gt;
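The time-preference argument can be made concrete with a toy exponential-discounting calculation; the rewards and discount rate below are invented purely for illustration:&lt;br /&gt;

```python
def discounted_value(reward, delay, rate=0.5):
    """Present value of a reward arriving after `delay` steps, with
    exponential discounting at the given per-step rate."""
    return reward * (1.0 - rate) ** delay

# A small immediate reward (getting this answer marked correct) vs. a
# larger but delayed reward (better learning from acting metacognitively).
immediate = discounted_value(1.0, delay=0)  # 1.0
delayed = discounted_value(5.0, delay=4)    # 5 * 0.5**4 = 0.3125
# With a steep enough discount, the immediate reward wins even though
# the delayed reward is objectively larger.
```

On these made-up numbers the immediate reward dominates, which is the behavioral pattern the hypothesis attributes to students.&lt;br /&gt;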
&lt;br /&gt;
=== Connections===&lt;br /&gt;
&lt;br /&gt;
* The Help Tutor attempts to extend traditional tutoring beyond the common domains. In that, it is similar to the work of Amy Ogan on tutoring [[FrenchCulture | French Culture]]&lt;br /&gt;
&lt;br /&gt;
* The manipulation of interaction between the student and the tutor, which is &amp;quot;natural&amp;quot; in the control condition, is guided by the help tutor.  This is similar to the scripting manipulation of the [[Rummel Scripted Collaborative Problem Solving]] and the [[Walker A Peer Tutoring Addition]] projects.&lt;br /&gt;
&lt;br /&gt;
* Another example for studying the effects of hints is [[Ringenberg Examples-as-Help|Ringenberg&#039;s study]], in which hints are compared to examples. &lt;br /&gt;
&lt;br /&gt;
* Going to do an in-vivo study at a LearnLab site? Check out how to answer [[FAQ for teachers|teacher&#039;s FAQ]]&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
Plans for June 2007 - Dec. 2007:&lt;br /&gt;
* Present the study in the International Conference on Artificial Intelligence on Education&lt;br /&gt;
* Submit camera ready copy of the paper to the Journal on Metacognition and Instruction&lt;br /&gt;
* Analyze the logfiles for Metacognitive learning&lt;br /&gt;
&lt;br /&gt;
=== Annotated bibliography ===&lt;br /&gt;
&lt;br /&gt;
# Aleven, V., &amp;amp; Koedinger, K.R. (2000) Limitations of student control: Do students know when they need help? in proceedings of 5th International Conference on Intelligent Tutoring Systems, 292-303. Berlin: Springer Verlag. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., McLaren, B.M., Roll, I., &amp;amp; Koedinger, K.R. (2004) Toward tutoring help seeking - Applying cognitive modeling to meta-cognitive skills . in proceedings of 7th International Conference on Intelligent Tutoring Systems, 227-39. Berlin: Springer-Verlag. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., Roll, I., McLaren, B.M., Ryu, E.J., &amp;amp; Koedinger, K.R. (2005) An architecture to combine meta-cognitive and cognitive tutoring: Pilot testing the Help Tutor. in proceedings of 12th International Conference on Artificial Intelligence in Education, Amsterdam, The Netherlands: IOS press. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., McLaren, B.M., Roll, I., &amp;amp; Koedinger, K.R. (2006). Toward meta-cognitive tutoring: A model of help seeking with a Cognitive Tutor. Int Journal of Artificial Intelligence in Education(16), 101-30 [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Baker, R.S., Corbett, A.T., &amp;amp; Koedinger, K.R. (2004) Detecting Student Misuse of Intelligent Tutoring Systems. in proceedings of 7th International Conference on Intelligent Tutoring Systems, 531-40.&lt;br /&gt;
# Baker, R.S., Roll, I., Corbett, A.T., &amp;amp; Koedinger, K.R. (2005) Do Performance Goals Lead Students to Game the System? in proceedings of 12th International Conference on Artificial Intelligence in Education, 57-64. Amsterdam, The Netherlands: IOS Press.&lt;br /&gt;
# Chang, K.K., Beck, J.E., Mostow, J., &amp;amp; Corbett, A. (2006) Does Help Help? A Bayes Net Approach to Modeling Tutor Interventions. in proceedings of Workshop on Educational Data Mining at AAAI 2006, 41-6. Menlo Park, California: AAAI.&lt;br /&gt;
# Feldstein, M.S. (1964). The Social Time Preference Discount Rate in Cost-Benefit Analysis. The Economic Journal 74(294), 360-79&lt;br /&gt;
# Roll, I., Baker, R.S., Aleven, V., McLaren, B.M., &amp;amp; Koedinger, K.R. (2005) Modeling Students’ Metacognitive Errors in Two Intelligent Tutoring Systems. in L. Ardissono,  (Eds.), in proceedings of User Modeling 2005, 379-88. Berlin: Springer-Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Ryu, E., Sewall, J., Leber, B., McLaren, B.M., Aleven, V., &amp;amp; Koedinger, K.R. (2006) Towards Teaching Metacognition: Supporting Spontaneous Self-Assessment. in proceedings of 8th International Conference on Intelligent Tutoring Systems, 738-40. Berlin: Springer Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B.M., Ryu, E., Baker, R.S., &amp;amp; Koedinger, K.R. (2006) The Help Tutor: Does Metacognitive Feedback Improve Students&#039; Help-Seeking Actions, Skills and Learning? in proceedings of 8th International Conference on Intelligent Tutoring Systems, 360-9. Berlin: Springer Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B. M., &amp;amp; Koedinger, K. R. (2007). Can help seeking be tutored? Searching for the secret sauce of metacognitive tutoring. International Conference on Artificial Intelligence in Education, , 203-10. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B. M., &amp;amp; Koedinger, K. R. (2007). Designing for metacognition - applying cognitive tutor principles to the tutoring of help seeking. Metacognition and Learning, 2(2). [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Schworm, S., &amp;amp; Renkl, A. (2002) Learning by solved example problems: Instructional explanations reduce self-explanation activity. in proceedings of The 24Th Annual Conference of the Cognitive Science Society, 816-21. Mahwah, NJ: Erlbaum.&lt;br /&gt;
# Yudelson, M.V., Medvedeva, O., Legowski, E., Castine, M., Jukic, D., &amp;amp; Crowley, R.S. (2006) Mining Student Learning Data to Develop High Level Pedagogic Strategy in a Medical ITS. in proceedings of Workshop on Educational Data Mining at AAAI 2006, Menlo Park, CA: AAAI.&lt;br /&gt;
# Roll, I., Aleven, V., &amp;amp; Koedinger, K.R. (2004) Promoting Effective Help-Seeking Behavior through Declarative Instruction. in proceedings of 7th International Conference on Intelligent Tutoring Systems, 857-9. Berlin: Springer-Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Empirical Study]]&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=File:Four_months_of_HT.jpg&amp;diff=9496</id>
		<title>File:Four months of HT.jpg</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=File:Four_months_of_HT.jpg&amp;diff=9496"/>
		<updated>2009-05-21T04:14:22Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=The_Help_Tutor_Roll_Aleven_McLaren&amp;diff=9495</id>
		<title>The Help Tutor Roll Aleven McLaren</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=The_Help_Tutor_Roll_Aleven_McLaren&amp;diff=9495"/>
		<updated>2009-05-21T04:12:43Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Evaluation of goal 2: Improve metacognitive behavior */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Towards Tutoring [[Metacognition]] - The Case of Help Seeking ==&lt;br /&gt;
Ido Roll, Vincent Aleven, Bruce M. McLaren, Kenneth Koedinger&lt;br /&gt;
&lt;br /&gt;
=== Meta-data ===&lt;br /&gt;
&lt;br /&gt;
PIs: Vincent Aleven, Ido Roll, Bruce M. McLaren, Ken Koedinger&lt;br /&gt;
&lt;br /&gt;
Other Contributors: EJ Ryu (programmer)&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Study # !! Start Date !! End Date !! LearnLab Site !!  # of Students !! Total Participant Hours !! DataShop? &lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;1&#039;&#039;&#039; || 2004       || 2004     || Analysis of existing data || 40 || 280 || No, old data&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;2&#039;&#039;&#039; || 2005       || 2005     || Analysis of existing data || 70 || 105 || No, old data&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;3&#039;&#039;&#039; || 5/2005     || 5/2005   || Hampton &amp;amp; Wilkinsburg (Geometry) || 60 || 270 || No, incompatible format&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;4&#039;&#039;&#039; || 2/2006     || 4/2006   || CWCTC (Geometry)          || 84 || 1,008 || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
&lt;br /&gt;
While working with a tutoring system, students are expected to regulate their own learning process. However, they often demonstrate inadequate [[metacognition|metacognitive processes]] in doing so. For example, students often ask for help too frequently or not frequently enough. &lt;br /&gt;
In this project we built an Intelligent Tutoring System to teach [[metacognition]], and in particular, to improve students&#039; [[help-seeking behavior]].  Our Help Seeking Support Environment includes three components:&lt;br /&gt;
# Direct [[help-seeking behavior|help seeking]] [[explicit instruction|instruction]], given by the teacher&lt;br /&gt;
# A [[Self-Assessment]] Tutor, to help students evaluate their own need for help&lt;br /&gt;
# The Help Tutor - a domain-independent agent that can be added as an adjunct to a [[cognitive tutor]]. Rather than making help-seeking decisions for the students, the Help Tutor teaches better help-seeking skills by tracing students&#039; actions against a (meta)cognitive [[help-seeking model]] and giving students appropriate feedback. &lt;br /&gt;
&lt;br /&gt;
In a series of [[in vivo experiment]]s, the Help Tutor accurately detected help-seeking errors that were associated with poorer learning and with poorer [[declarative]] and [[procedural]] [[knowledge component]]s of help seeking.  The main findings were that students made fewer help-seeking errors while working with the Help Tutor and acquired better help seeking [[declarative]] [[knowledge component]]s. &lt;br /&gt;
However, we did not find evidence that this led to an improvement in learning at the domain level or to better [[help-seeking behavior]] in a paper-and-pencil environment. &lt;br /&gt;
We pose a number of hypotheses in an attempt to explain these results. We question the current focus of metacognitive tutoring, and suggest ways to reexamine the role of [[help facilities]] and of metacognitive tutoring within Intelligent Tutoring Systems.&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
Teaching [[metacognition]] not only holds the promise of improving current learning in the domain of interest; it can also, and perhaps mainly, accelerate future learning and successful regulation of independent learning. One example of metacognitive knowledge is help-seeking [[knowledge component]]s: the ability to identify the need for help, and to elicit appropriate assistance from the [[relevant resources|help facilities]].  &lt;br /&gt;
However, considerable evidence shows that metacognitive [[knowledge component]]s are in need of better support. For example, while working with Intelligent Tutoring Systems, students try to &amp;quot;[[game the system]]&amp;quot; or do not [[self-explanation|self-explain]] enough. Similarly, research shows that students&#039; [[help-seeking behavior]] leaves much room for improvement. &lt;br /&gt;
&lt;br /&gt;
==== Shallow help seeking [[knowledge component]]s ====&lt;br /&gt;
Research shows that students do not use their help-seeking knowledge components appropriately. For example, Aleven et al. (2006) show that 30% of students&#039; actions were consecutive fast help requests (a common form of [[help abuse]], termed &#039;[[clicking through hints]]&#039;), without taking enough time to read the requested hints.  &lt;br /&gt;
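A pattern like clicking through hints can be flagged from tutor log files roughly as follows. The log format and the 2-second threshold are assumptions for illustration; this is not the classifier actually used by Aleven et al. (2006):&lt;br /&gt;

```python
FAST = 2.0  # assumed threshold in seconds for a "too fast" hint request

def count_click_through(log):
    """Count hint requests that follow another hint request within FAST
    seconds, i.e. too quickly for the previous hint to have been read.

    log: list of (action, seconds_since_previous_action) pairs.
    """
    n = 0
    prev = None
    for action, gap in log:
        if action == "hint" and prev == "hint" and gap < FAST:
            n += 1
        prev = action
    return n
```

Dividing such a count by the total number of actions gives the kind of per-student rate reported above.&lt;br /&gt;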
Extensive log-file analysis suggests that students apply faulty [[knowledge component]]s such as the following:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faulty [[procedural]] [[knowledge component]]s:&#039;&#039;&#039;&lt;br /&gt;
Cognitive aspects:&lt;br /&gt;
  If I don’t know the answer =&amp;gt; &lt;br /&gt;
  I should guess&lt;br /&gt;
&lt;br /&gt;
Motivational aspects:&lt;br /&gt;
  If I get the answer correct =&amp;gt;&lt;br /&gt;
  I achieved the goal&lt;br /&gt;
&lt;br /&gt;
Social aspects:&lt;br /&gt;
  If I ask for help =&amp;gt;&lt;br /&gt;
  I am weak&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Faulty [[declarative]] [[knowledge component]]s:&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
  Asking for hints will always reduce my skill level&lt;br /&gt;
&lt;br /&gt;
  Making an error is better than asking for a hint&lt;br /&gt;
&lt;br /&gt;
  Only weak people ask for help&lt;br /&gt;
&lt;br /&gt;
==== Teaching vs. supporting [[metacognition]] ====&lt;br /&gt;
&lt;br /&gt;
Several systems support students&#039; metacognitive actions in a way that encourages, or even forces, students to learn productively and efficiently. For example, a tutoring system can require the student to self-explain. While this approach is likely to improve domain learning in the supported environment, the effect is not likely to persist beyond the scope of the tutoring system, and therefore is not likely to help students become better future learners. &lt;br /&gt;
&lt;br /&gt;
Towards that end, we chose not to &#039;&#039;&#039;support&#039;&#039;&#039; students&#039; help seeking actions, but to &#039;&#039;&#039;teach&#039;&#039;&#039; them better help-seeking skills. Rather than making the metacognitive decisions for the students (for example, by preventing help-seeking errors or gaming opportunities), this study focuses on helping students refine their Help Seeking [[knowledge component]]s and acquire better [[feature validity]] of their [[help-seeking behavior|help-seeking]] [[metacognition|metacognitive skills]].&lt;br /&gt;
&lt;br /&gt;
By doing so, we examine whether metacognitive knowledge can be taught using familiar conventional domain-level pedagogies.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
See [[:Category:Help Tutor|Help Tutor Glossary]]&lt;br /&gt;
&lt;br /&gt;
=== Research questions ===&lt;br /&gt;
&lt;br /&gt;
# Can conventional and well-established instructional principles in the domain level be used to tutor [[metacognition|metacognitive]] [[knowledge component]]s such as [[help-seeking behavior|Help Seeking]] [[knowledge component]]s?&lt;br /&gt;
# Does the practice of better metacognitive behavior translate, in turn, into better domain learning?&lt;br /&gt;
&lt;br /&gt;
In addition, the project makes the following contributions:&lt;br /&gt;
# An improved understanding of the nature of help-seeking knowledge and its acquisition.&lt;br /&gt;
# A novel framework for the design of  goals, interaction and assessment for metacognitive tutoring.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Two studies were performed with the Help Tutor. In both studies the independent variable was the presence of help-seeking support.&lt;br /&gt;
The control condition used the conventional Geometry Cognitive Tutor:&lt;br /&gt;
&lt;br /&gt;
[[Image:Geometry Cognitive Tutor.jpg]]&lt;br /&gt;
&lt;br /&gt;
The treatment condition varied between studies:&lt;br /&gt;
* Study one: The Geometry Cognitive Tutor + the Help Tutor&lt;br /&gt;
* Study two: The Geometry Cognitive Tutor + the Help Seeking Support Environment (help seeking explicit instruction, self-assessment tutor, and Help Tutor)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Help Tutor:&#039;&#039;&#039;&lt;br /&gt;
The Help Tutor is a Cognitive Tutor in its own right that identifies recommended types of actions by tracing students’ interaction with the Geometry Cognitive Tutor relative to a metacognitive help-seeking model. When students perform actions that deviate from the recommended ones, the Help Tutor presents a message that stresses the recommended action to be taken. Messages from the metacognitive Help Tutor and the domain-level Cognitive Tutor are coordinated, so that the student receives only the most helpful message at each point [2].   &lt;br /&gt;
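The tracing described above can be sketched as comparing each observed action against the action the model recommends; the skill threshold and action names here are illustrative assumptions, not the actual help-seeking model:&lt;br /&gt;

```python
def recommended_action(skill_estimate):
    """Assumed toy policy: ask for a hint on weak skills, try the step
    unaided on strong ones."""
    if skill_estimate < 0.4:
        return "ask_hint"
    return "try_step"

def trace_action(skill_estimate, observed_action):
    """Return (is_help_seeking_error, recommendation) for one action,
    flagging a mismatch between observed and recommended behavior."""
    rec = recommended_action(skill_estimate)
    return (observed_action != rec, rec)
```

When a mismatch is flagged, a metacognitive message stressing the recommended action would be queued, subject to the coordination with domain-level messages described above.&lt;br /&gt;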
&lt;br /&gt;
[[Image:The Help-tutor.jpg]]&lt;br /&gt;
[[image:The Help Seeking Model.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;The Self-Assessment Tutor:&#039;&#039;&#039;&lt;br /&gt;
The ability to correctly self-assess one’s own knowledge level is correlated with strategic use of help (Tobias and Everson, 2002). The Self-Assessment Tutor is designed to tutor students on &lt;br /&gt;
their self-assessment skills; to help students make appropriate learning decisions based on their self-assessment; and mainly, to give students a tutoring environment, low in cognitive load, in which they can practice using their help-seeking skills.  &lt;br /&gt;
The curriculum used by the Treatment group in study two consists of interleaving Self Assessment and Cognitive Tutor + Help Tutor sessions, with the Self Assessment sessions taking about 10% of the students’ time. During each self-assessment session the student assesses the skills to be practiced in the subsequent Cognitive Tutor section.&lt;br /&gt;
  &lt;br /&gt;
[[Image:The Self-Assessment Tutor.jpg]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Explicit help-seeking instruction:&#039;&#039;&#039;&lt;br /&gt;
As White and Frederiksen demonstrated (1998), reflecting in the classroom environment on the desired metacognitive process helps students internalize it. With that goal in mind, we created a short classroom lesson about help seeking with the following objectives: to give students a better declarative understanding of desired and effective help-seeking behavior; to improve their dispositions and attitudes towards seeking help; and to frame the help-seeking knowledge as an important learning goal, alongside Geometry knowledge, for the coming few weeks. The instruction includes a video presentation with examples of productive and faulty help-seeking behavior and the appropriate help-seeking principles. &lt;br /&gt;
&lt;br /&gt;
[[Image:Explicit help-seeking instruction.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Dependent variables ===&lt;br /&gt;
&lt;br /&gt;
The study uses two levels of dependent measures:&lt;br /&gt;
# Directly assessing Help Seeking skills&lt;br /&gt;
# Assessing domain-level learning, thereby evaluating the contribution of the help-seeking skills.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. Assessments of help-seeking knowledge:&lt;br /&gt;
* [[Normal post-test]]: &lt;br /&gt;
** Declarative: hypothetical help-seeking dilemmas&lt;br /&gt;
** Procedural: Help seeking error rate while working with the tutor&lt;br /&gt;
* [[Transfer]]: Ability to use optional hints embedded within certain test items in the paper test.&lt;br /&gt;
&lt;br /&gt;
[[Image:embedded hints.jpg]]&lt;br /&gt;
&lt;br /&gt;
2. Assessments of domain knowledge:&lt;br /&gt;
* [[Normal post-test]]: Problem solving and explanation items like those in the tutor&#039;s instruction.&lt;br /&gt;
* [[Transfer]]: &lt;br /&gt;
** Data insufficiency (or &amp;quot;not enough information&amp;quot;) items.&lt;br /&gt;
** Conceptual understanding items (study two only)&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
&lt;br /&gt;
The combination of explicit help-seeking instruction, timely feedback on help-seeking errors, and raising awareness of knowledge deficits will&lt;br /&gt;
* Improve [[feature validity]] of students&#039; help seeking skills&lt;br /&gt;
and thus, in turn, will&lt;br /&gt;
* Improve learning of domain knowledge by using those skills effectively.&lt;br /&gt;
&lt;br /&gt;
==== Instructional Principles ====&lt;br /&gt;
&lt;br /&gt;
The main principle being evaluated here is whether [[Roll_help seeking principle | instruction should support meta-cognition in the context of problem solving]] by using principles of cognitive tutoring such as:&lt;br /&gt;
* Giving direct instruction&lt;br /&gt;
* Giving immediate feedback on errors&lt;br /&gt;
* Prompting for self-assessment&lt;br /&gt;
&lt;br /&gt;
This utilizes the following instructional principles:&lt;br /&gt;
&lt;br /&gt;
* The Self-Assessment Tutor utilizes the [[Reflection questions]] principle&lt;br /&gt;
* The Help Tutor itself utilizes the [[Tutoring feedback]] principle&lt;br /&gt;
* The Help Seeking Instruction utilizes the [[Explicit instruction]] principle.&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
As seen below (adapted from Roll et al. 2006), metacognitive tutoring has the following goals:&lt;br /&gt;
# First, the tutoring system should capture metacognitive errors (in our case, help-seeking errors).&lt;br /&gt;
# Then, it should lead to improved metacognitive behavior within the tutoring system.&lt;br /&gt;
# This, in turn, should lead to an improvement in domain learning.&lt;br /&gt;
# The effect should persist beyond the scope of the tutoring system.&lt;br /&gt;
# As a result, students are expected to become better future learners.&lt;br /&gt;
&lt;br /&gt;
[[Image:Roll_Pyramid.jpg]] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 1: Capture metacognitive errors ====&lt;br /&gt;
&lt;br /&gt;
In study 1, 17% of students&#039; actions were classified as errors. These errors were significantly negatively correlated with learning (r=-0.42): the more help-seeking errors captured by the system, the smaller the improvement from pre- to post-test.&lt;br /&gt;
These data suggest that the help-seeking model classifies actions appropriately, and that the goal was achieved: the Help Tutor captures help-seeking errors.&lt;br /&gt;
&lt;br /&gt;
[[Image:help-seeking_and_learning.jpg]]&lt;br /&gt;
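The correlation analysis above can be sketched in a few lines; a minimal Python example with made-up per-student numbers (not the study's data) illustrating how a help-seeking error rate can be correlated with pre-to-post learning gain:

```python
# Sketch of the goal-1 analysis with hypothetical data: correlate each
# student's help-seeking error rate with their pre-to-post learning gain.

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical students: help-seeking error rate and pre-to-post gain.
error_rate = [0.05, 0.10, 0.20, 0.30, 0.40]
gain       = [0.35, 0.30, 0.22, 0.15, 0.08]

r = pearson_r(error_rate, gain)
print(round(r, 2))  # strongly negative, mirroring the direction of the reported r=-0.42
```

The made-up data are chosen only to show the direction of the relationship; the reported effect size (r=-0.42) comes from the actual study.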
&lt;br /&gt;
==== Evaluation of goal 2: Improve metacognitive behavior ====&lt;br /&gt;
&lt;br /&gt;
Students’ hint usage improved significantly while working with the Help Tutor across several measures, most notably help-seeking error rate, frequency of asking for bottom-out hints, and hint reading time. Study 2 also shows that these aspects of improvement persist even beyond the scope of the Help Tutor. These improvements consistently reached significance only after an extended period of working with the Help Tutor (i.e., after the second part of the study). We hypothesize that since the Help Tutor feedback appeared in two different areas of geometry learning, students could more easily acquire the domain-independent help-seeking skills and thus transfer them better to the other subject matter areas addressed between and after the two in which the Help Tutor was used. &lt;br /&gt;
The addition of the help-seeking instruction and self-assessment episodes in Study 2 led to improvements in students’ conceptual help-seeking knowledge, time to read hints, and an apparent improvement in hint-to-error ratio. The fact that students were more likely to ask for hints than to commit more errors provides behavioral evidence that the self-assessment instruction, provided by both the Self-Assessment Tutor and by certain messages in the Help Tutor, led students to be more aware of their need for help, reinforcing the causal link between self-assessment and strategic help seeking (Tobias &amp;amp; Everson, 2002). &lt;br /&gt;
Although students’ hint usage improved, no major improvements in the deliberateness of students’ solution attempts were found (beyond that indicated by the differences in hint-to-error ratio). It may be that the Help Tutor did not support this aspect of learning well enough. An alternative explanation is that as long as students attempted to solve problems (whether successfully or not) they were too occupied by the problem-solving attempts and thus did not devote attention (or cognitive resources) to the Help Tutor until they reached an impasse that required them to ask for help. This may be an outcome of our design decision to give feedback in the context of domain learning. &lt;br /&gt;
&lt;br /&gt;
[[Image:four months of HT.jpg]]&lt;br /&gt;
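The hint-to-error ratio used above as a behavioral measure can be computed directly from interaction logs; a minimal sketch assuming a hypothetical flat log of action labels:

```python
# Sketch of the hint-to-error ratio: of a student's unsuccessful steps,
# what fraction were hint requests rather than wrong solution attempts?
# The log format (a flat list of action labels) is an assumption here.

def hint_to_error_ratio(events):
    """events: list of action labels such as 'correct', 'error', 'hint'."""
    hints = events.count("hint")
    errors = events.count("error")
    if hints + errors == 0:
        return None  # no unsuccessful steps to classify
    return hints / (hints + errors)

log = ["correct", "error", "hint", "correct", "error", "error", "hint"]
print(hint_to_error_ratio(log))  # 2 hints vs. 3 errors -> 0.4
```

A higher ratio indicates that a student preferred asking for help over repeated guessing, which is the shift the self-assessment instruction appeared to encourage.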
&lt;br /&gt;
==== Evaluation of goal 3: Improve domain learning ====&lt;br /&gt;
&lt;br /&gt;
While students&#039; help-seeking behavior improved while working with the Help Tutor (in study 1) or the full Help Seeking Support Environment (in study 2), we did not observe differences in learning between the two conditions in either study.&lt;br /&gt;
&lt;br /&gt;
Study 1 results:&lt;br /&gt;
&lt;br /&gt;
[[Image:study_1_results.jpg]]&lt;br /&gt;
&lt;br /&gt;
Study 2 results:&lt;br /&gt;
&lt;br /&gt;
[[Image:study_2_results.jpg]]&lt;br /&gt;
&lt;br /&gt;
* Note: since the tests at times 1 and 2 evaluated different instructional units (Angles vs. Quadrilaterals), lower grades at time 2 do not indicate a decrease in knowledge.&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 4: Improve future metacognitive behavior ====&lt;br /&gt;
&lt;br /&gt;
To evaluate whether the effect of the help-seeking curriculum persists beyond the tutored environment, students&#039; help-seeking behavior was evaluated in a transfer environment - the paper-and-pencil tests.&lt;br /&gt;
&lt;br /&gt;
Hypothetical help seeking dilemmas, such as the one described below, were used to evaluate declarative help-seeking knowledge.&lt;br /&gt;
&lt;br /&gt;
  1. You tried to answer a question that you know, but for some reason the tutor says that your answer is wrong. What should you do? &lt;br /&gt;
  [ ] First I would review my calculations. Perhaps I can find the mistake myself? &lt;br /&gt;
  [ ] The Tutor must have made a mistake. I will retype the same answer again. &lt;br /&gt;
  [ ] I would ask for a hint, to understand my mistake.&lt;br /&gt;
&lt;br /&gt;
Procedural help-seeking skills were evaluated using hints embedded in the tests (see figure in the Dependent Variables section above).&lt;br /&gt;
&lt;br /&gt;
In study 1 (which included only the Help Tutor component), students in the Treatment condition demonstrated neither better declarative nor better procedural help-seeking knowledge, compared with the Control condition.&lt;br /&gt;
&lt;br /&gt;
In study 2 (which included the explicit help-seeking instruction and the Self-Assessment Tutor in addition to the Help Tutor), students in the Treatment condition demonstrated better declarative help-seeking knowledge than Control group students, but no better procedural knowledge.&lt;br /&gt;
&lt;br /&gt;
[[Image:Declarative_knowledge.jpg]]&lt;br /&gt;
[[Image:Procedural_knowledge.jpg]]&lt;br /&gt;
&lt;br /&gt;
==== Evaluation of goal 5: Improve future domain learning ====&lt;br /&gt;
&lt;br /&gt;
Due to technical difficulties, this goal was not evaluated in either study.&lt;br /&gt;
&lt;br /&gt;
==== Summary of results ====&lt;br /&gt;
&lt;br /&gt;
Overall, the following pattern of results emerges from the studies:&lt;br /&gt;
* The Help Seeking Support Environment intervened on faulty help-seeking actions during the learning process&lt;br /&gt;
* Students improved their help-seeking behavior while working with the system&lt;br /&gt;
* Students acquired better declarative help-seeking knowledge following their use of the system&lt;br /&gt;
* However, students&#039; domain learning did not improve&lt;br /&gt;
* Also, the improvement in students&#039; help-seeking behavior did not persist beyond the tutoring system.&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
These somewhat disappointing results raise an important question: Why did the environment not lead to an improvement in learning and in help-seeking behavior in the paper-test measures? Why did the improved online help-seeking behavior not lead to improved learning gains?&lt;br /&gt;
&lt;br /&gt;
==== Hypothesis 1: Students do not have the skills, and we did not teach them well. ==== &lt;br /&gt;
One possible explanation may be that the Help Seeking Support Environment imposes excessive cognitive load during problem solving. Clearly, the learning process with the Help Seeking Support Environment is more demanding compared with the conventional Cognitive Tutor alone, since more needs to be learned. However, much of the extra content is introduced during the classroom discussion and self-assessment sessions. The only extra content presented during the problem-solving sessions is the Help Tutor’s error messages, but these are not expected to increase the load much, especially given that a prioritization algorithm makes sure students receive only one message at a time (either from the Help Tutor or the [[cognitive tutor]]). Also, if the Help Seeking Support Environment indeed requires too much cognitive load, it should be expected to hinder learning, which we did not observe.&lt;br /&gt;
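The one-message-at-a-time prioritization described above might look roughly like the following sketch. The priority ordering, message texts, and data shape are all assumptions for illustration, not the tutors' actual implementation:

```python
# Hypothetical sketch: when the domain-level Cognitive Tutor and the
# metacognitive Help Tutor both produce feedback for the same student
# action, only one message is shown. We assume (for illustration) that
# domain feedback outranks metacognitive feedback.

PRIORITY = {"cognitive": 2, "help_tutor": 1}

def select_message(candidates):
    """Pick the single message to display, or None if there are none."""
    if not candidates:
        return None
    return max(candidates, key=lambda m: PRIORITY[m["source"]])

msgs = [
    {"source": "help_tutor", "text": "Slow down and read the hint carefully."},
    {"source": "cognitive", "text": "Check the angle sum of the triangle."},
]
chosen = select_message(msgs)
print(chosen["text"])
```

Whatever the actual ordering, the design point is the same: the student never sees competing messages simultaneously, keeping the added load of metacognitive feedback small.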
&lt;br /&gt;
==== Hypothesis 2: The role of help seeking in ITS ==== &lt;br /&gt;
Hints in tutoring systems have two objectives: to promote learning of challenging skills, and to help students move forward within the curriculum (i.e., to prevent them from getting stuck). While the latter is achieved easily with both the Cognitive Tutor and the Help Seeking Support Environment, achieving the former is much harder. It is not yet clear what makes a good hint, or how to create an effective [[hint sequence]]. It is possible that the hints, as implemented in the units of the Cognitive Tutor we used, are not optimal. For example, there may be too many levels of hints, with each level adding too little information to the previous one. Also, perhaps the detailed explanations are too demanding with regard to students’ reading comprehension ability. It is quite possible that these hints, regardless of how they are being used, are above students&#039; [[zone of proximal development]], and thus do not contribute much to learning. Support for that idea comes from Schworm and Renkl (2002), who found that explanations offered by the system impaired learning when [[self-explanation]] was required. The Geometry Cognitive Tutor prompts for [[self-explanation]] in certain units. Perhaps elaborated hints are redundant, or even damaging, when [[self-explanation]] is required. &lt;br /&gt;
It is possible also that [[help-seeking behavior]] that we currently view as faulty may actually be useful and desirable, in specific contexts for specific students. For example, perhaps a student who does not know the material should be allowed to view the bottom-out hint immediately, in order to turn the problem into a solved example. Support for that idea can be found in work by Yudelson et al. (2006), in which medical students in a leading medical school successfully learned by repeatedly asking for more elaborated hints. Such “clicking-through hints” behavior would be considered faulty by the Help Seeking Support Environment. However, this population of students is known to have good metacognitive skills (without them it is unlikely they would have reached their current position). Thus, it seems that sometimes “misusing” help (e.g., [[help abuse]], according to the Help Seeking Support Environment) can be beneficial to some students. Further evidence can be found in Baker et al. (2004), who showed that some (but not all) students who “[[game the system]]” (i.e., click through hints or guess repeatedly) learn just as much as students who do not game. It may be the case that certain gaming behaviors are adaptive, and not irrational. Students who use these strategies will insist on viewing the [[bottom out hint]] and will ignore all intermediate hints, whether domain-level or metacognitive. Once intermediate hints are ignored, better [[help-seeking behavior]] according to the Help Seeking Support Environment should have no effect whatsoever on domain [[knowledge component]]s, as indeed was seen.&lt;br /&gt;
It is possible that we are overestimating students’ ability to learn from hints. Our first recommendation is to re-evaluate the role of hints in [[cognitive tutor]]s using complementary methodologies such as log-file analysis (e.g., Chang (2006) uses dynamic Bayes nets to evaluate the contribution of hints in a reading tutor); tracing individual students (to evaluate the different uses students make of hints); experimentation with different types of hints (for example, proactive vs. on-demand); and analysis of human tutors who aid students while working with [[cognitive tutor]]s.&lt;br /&gt;
&lt;br /&gt;
==== Hypothesis 3: The focus of metacognitive tutoring in ITS. ====&lt;br /&gt;
The previous hypothesis, focused on students’ tendency to skip hints, suggests that perhaps the main issue is not lack of knowledge, but lack of motivation. In other words, &#039;&#039;&#039;perhaps students already have these skills in place, but choose not to use them.&#039;&#039;&#039; For students who ignore intermediate hints, metacognitive messages offer little incentive. While the Help Seeking Support Environment can increase the probability that a proper hint level appears on the screen, it has no influence on whether it is being read, or whether the student attempts to understand it. Students may ignore the messages for several reasons. For example, they may habitually click through hints, and may resent the changes that the Help Seeking Support Environment imposes. This idea is consistent with the teachers’ observation that the students were not fond of the Help Seeking Support Environment error messages. They may comply with them, in order to make progress, but beyond that will ignore their content. The test data discussed above provides support for this idea. On 7 out of the 12 hint evaluations (seen in the findings section of goal 4) students scored lower on items with hints than on items with no hints. A [[cognitive headroom]] explanation does not account for this difference, since the Request hints did not add much load. A more likely explanation is that students chose to skip the hints since they were new to them in the given context. Baker (2005) reviewed several reasons for why students [[game the system]]. While no clear answer was given, the question is applicable here as well. &lt;br /&gt;
Motivational issues bring us to our final hypothesis. Time preference discounting (Feldstein, 1964) is a term from economics that describes behavior in which people prefer a smaller immediate reward over a greater but more distant one. In the tutoring environment, comparing the benefit of an immediate correct answer with the delayed benefit (if any) of acting in a metacognitively sound way may often lead the student to choose the former. If that is indeed the case, then students may already have the right metacognitive skills in place. The question we should be asking ourselves is not only how to get students to learn the desired metacognitive skills, but mainly how to get them to use those skills.&lt;br /&gt;
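The trade-off can be made concrete with standard exponential discounting; a small illustrative sketch with hypothetical reward values and discount rate (the numbers are not from the study):

```python
# Illustration of time-preference discounting as applied above: a small
# immediate payoff can outweigh a larger delayed one when the discount
# rate is high. Rewards, delay, and rate are hypothetical.

def present_value(reward, delay, rate):
    """Discounted value of a reward received after `delay` periods."""
    return reward / (1 + rate) ** delay

immediate = present_value(1.0, delay=0, rate=0.5)  # correct answer now
delayed   = present_value(3.0, delay=5, rate=0.5)  # learning payoff later

print(immediate, delayed)  # the immediate reward dominates at this rate
```

Under this framing, clicking through to the bottom-out hint is not irrational; the student is simply discounting the distant payoff of learning more steeply than the instructor would like.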
&lt;br /&gt;
=== Connections===&lt;br /&gt;
&lt;br /&gt;
* The Help Tutor attempts to extend traditional tutoring beyond the common domains. In that, it is similar to the work of Amy Ogan on tutoring [[FrenchCulture | French Culture]]&lt;br /&gt;
&lt;br /&gt;
* The manipulation of interaction between the student and the tutor, which is &amp;quot;natural&amp;quot; in the control condition, is guided by the help tutor.  This is similar to the scripting manipulation of the [[Rummel Scripted Collaborative Problem Solving]] and the [[Walker A Peer Tutoring Addition]] projects.&lt;br /&gt;
&lt;br /&gt;
* Another example for studying the effects of hints is [[Ringenberg Examples-as-Help|Ringenberg&#039;s study]], in which hints are compared to examples. &lt;br /&gt;
&lt;br /&gt;
* Going to do an in-vivo study at a LearnLab site? Check out how to answer [[FAQ for teachers|teacher&#039;s FAQ]]&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
Plans for June 2007 - Dec. 2007:&lt;br /&gt;
* Present the study at the International Conference on Artificial Intelligence in Education&lt;br /&gt;
* Submit camera-ready copy of the paper to the journal Metacognition and Learning&lt;br /&gt;
* Analyze the log files for metacognitive learning&lt;br /&gt;
&lt;br /&gt;
=== Annotated bibliography ===&lt;br /&gt;
&lt;br /&gt;
# Aleven, V., &amp;amp; Koedinger, K.R. (2000) Limitations of student control: Do students know when they need help? in proceedings of 5th International Conference on Intelligent Tutoring Systems, 292-303. Berlin: Springer Verlag. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., McLaren, B.M., Roll, I., &amp;amp; Koedinger, K.R. (2004) Toward tutoring help seeking - Applying cognitive modeling to meta-cognitive skills . in proceedings of 7th International Conference on Intelligent Tutoring Systems, 227-39. Berlin: Springer-Verlag. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., Roll, I., McLaren, B.M., Ryu, E.J., &amp;amp; Koedinger, K.R. (2005) An architecture to combine meta-cognitive and cognitive tutoring: Pilot testing the Help Tutor. in proceedings of 12th International Conference on Artificial Intelligence in Education, Amsterdam, The Netherlands: IOS press. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Aleven, V., McLaren, B.M., Roll, I., &amp;amp; Koedinger, K.R. (2006). Toward meta-cognitive tutoring: A model of help seeking with a Cognitive Tutor. International Journal of Artificial Intelligence in Education, 16, 101-30. [[http://www.cs.cmu.edu/~aleven/publications.html#help-seeking pdf]]&lt;br /&gt;
# Baker, R.S., Corbett, A.T., &amp;amp; Koedinger, K.R. (2004) Detecting Student Misuse of Intelligent Tutoring Systems. in proceedings of 7th International Conference on Intelligent Tutoring Systems, 531-40.&lt;br /&gt;
# Baker, R.S., Roll, I., Corbett, A.T., &amp;amp; Koedinger, K.R. (2005) Do Performance Goals Lead Students to Game the System? in proceedings of 12th International Conference on Artificial Intelligence in Education, 57-64. Amsterdam, The Netherlands: IOS Press.&lt;br /&gt;
# Chang, K.K., Beck, J.E., Mostow, J., &amp;amp; Corbett, A. (2006) Does Help Help? A Bayes Net Approach to Modeling Tutor Interventions. in proceedings of Workshop on Educational Data Mining at AAAI 2006, 41-6. Menlo Park, California: AAAI.&lt;br /&gt;
# Feldstein, M.S. (1964). The Social Time Preference Discount Rate in Cost-Benefit Analysis. The Economic Journal, 74(294), 360-79.&lt;br /&gt;
# Roll, I., Baker, R.S., Aleven, V., McLaren, B.M., &amp;amp; Koedinger, K.R. (2005) Modeling Students’ Metacognitive Errors in Two Intelligent Tutoring Systems. in L. Ardissono,  (Eds.), in proceedings of User Modeling 2005, 379-88. Berlin: Springer-Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Ryu, E., Sewall, J., Leber, B., McLaren, B.M., Aleven, V., &amp;amp; Koedinger, K.R. (2006) Towards Teaching Metacognition: Supporting Spontaneous Self-Assessment. in proceedings of 8th International Conference on Intelligent Tutoring Systems, 738-40. Berlin: Springer Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B.M., Ryu, E., Baker, R.S., &amp;amp; Koedinger, K.R. (2006) The Help Tutor: Does Metacognitive Feedback Improve Students&#039; Help-Seeking Actions, Skills and Learning? in proceedings of 8th International Conference on Intelligent Tutoring Systems, 360-9. Berlin: Springer Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B. M., &amp;amp; Koedinger, K. R. (2007). Can help seeking be tutored? Searching for the secret sauce of metacognitive tutoring. International Conference on Artificial Intelligence in Education, 203-10. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Roll, I., Aleven, V., McLaren, B. M., &amp;amp; Koedinger, K. R. (2007). Designing for metacognition - applying cognitive tutor principles to the tutoring of help seeking. Metacognition and Learning, 2(2). [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
# Schworm, S., &amp;amp; Renkl, A. (2002) Learning by solved example problems: Instructional explanations reduce self-explanation activity. in proceedings of The 24Th Annual Conference of the Cognitive Science Society, 816-21. Mahwah, NJ: Erlbaum.&lt;br /&gt;
# Yudelson, M.V., Medvedeva, O., Legowski, E., Castine, M., Jukic, D., &amp;amp; Crowley, R.S. (2006) Mining Student Learning Data to Develop High Level Pedagogic Strategy in a Medical ITS. in proceedings of Workshop on Educational Data Mining at AAAI 2006, Menlo Park, CA: AAAI.&lt;br /&gt;
# Roll, I., Aleven, V., &amp;amp; Koedinger, K.R. (2004) Promoting Effective Help-Seeking Behavior through Declarative Instruction. in proceedings of 7th International Conference on Intelligent Tutoring Systems, 857-9. Berlin: Springer-Verlag. [[http://www.andrew.cmu.edu/user/iroll/Publications.html pdf]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Empirical Study]]&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_IPL&amp;diff=9494</id>
		<title>Roll IPL</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_IPL&amp;diff=9494"/>
		<updated>2009-05-21T03:58:37Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Findings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Invention as preparation for learning ==&lt;br /&gt;
Ido Roll, Vincent Aleven, Bruce M. McLaren, Kenneth Koedinger&lt;br /&gt;
&lt;br /&gt;
Can invention activities prepare students to better learn from subsequent instruction, compared with instruction-and-practice only?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Meta-data ===&lt;br /&gt;
&lt;br /&gt;
PIs: Ido Roll, Vincent Aleven, Dan Schwartz, Ken Koedinger&lt;br /&gt;
&lt;br /&gt;
Other Contributors: David Klahr&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Study # !! Start Date !! End Date !! LearnLab Site !!  # of Students !! Total Participant Hours !! DataShop? &lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;1&#039;&#039;&#039; || 4/2007       || 4/2007     || North Hills || 20 || 40 || No, paper-and-pencil only&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;2&#039;&#039;&#039; || 9/2007       || 12/2007    || Community Day || 4 || 48 || No, paper-and-pencil only&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;3&#039;&#039;&#039; || 4/2008       || 5/2008     || Steel Valley || 125 || 900 || No, paper-and-pencil only&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;4&#039;&#039;&#039; || 4/2009       || 5/2009     || Steel Valley || 140 || 560 || Some of it in DataShop, the rest is getting there&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
&lt;br /&gt;
The [[assistance dilemma]] asks what form of assistance is most appropriate for different stages of learning. While direct instruction and practice have been shown to be efficient for novices, students often acquire shallow knowledge components and lack robust understanding. Some evidence suggests that invention using contrasting cases, prior to instruction and practice, can accelerate future learning, compared with instruction and practice alone (Schwartz &amp;amp; Martin, 2004). &lt;br /&gt;
The invention process as described in Schwartz &amp;amp; Martin (2004) includes the following stages: design (of a mathematical model to solve a class of problems); calculation (of the solution based on the model); evaluation (of its correctness); and debugging (of the faulty model). Notably, most students fail to invent mathematically valid models, so the goal is not for students to discover the correct solution. At the same time, students do make models that capture deep features of the class of problems, which prepares them to learn and understand the significance of expert solutions for handling such situations. Following the invention, students receive instruction on the expert solution (that is, formulas) and practice it. This procedure is based on the hypothesis that students’ own inventions, together with subsequent instruction, are sources for coordinative learning. By attempting to create a model that correctly distinguishes the “contrasting cases” (carefully selected instances within a class of problems) students notice (and to some degree invent) the problem features that an adequate model must take into account, and they attend to them during subsequent instruction. However, alternative explanations for the effectiveness of the IPL process are possible, with different instructional implications. A “debugging hypothesis” suggests that evaluation and debugging of pre-designed models are sufficient to promote future learning by directing students’ attention to the shortcomings of the designed models, and thus to the deep features of the domain. Alternatively, an “unfinished goals” hypothesis suggests that the effect is caused by students reaching impasses during invention. According to this hypothesis, calculation and evaluation alone are sufficient for preparing for future learning.&lt;br /&gt;
We propose to investigate this in a series of ablation studies with the goal of better defining the invention process and identifying the cognitive processes involved. This includes a combination of in-vivo and lab studies within the Algebra LearnLab, contributing to the Coordinative Learning theoretical framework. Following the ablation studies we plan to implement the procedure in a Cognitive Tutor, which will be evaluated in a lab study. This will allow us to better operationalize the process, do a micro-genetic analysis of it, and identify productive patterns of learning trajectories using log mining. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
One of the main challenges of education is to help students reach meaningful and robust learning. The assistance dilemma raises the question of what form (and ‘amount’) of assistance is most effective with different learners in different stages of the learning process (Koedinger &amp;amp; Aleven, in press). Instruction followed by practice is known to be very efficient for teaching novices (e.g., Koedinger, Anderson, Hadley &amp;amp; Mark, 1997); yet, students often acquire shallow procedural skills, and fail to acquire conceptual understanding (Aleven &amp;amp; Koedinger, 2002). This can be attributed, at least in part, to students using superficial features and not encoding the deep features of the domain (Chi, Feltovich &amp;amp; Glaser, 1981). &lt;br /&gt;
One approach to getting students to attend to and encode the deep features is to add an invention phase prior to instruction. Invention as preparation for learning (IPL) was shown to help students better cope with novel situations that require learning (Schwartz &amp;amp; Martin, 2004; Sears, 2006). In this process students are presented with a dilemma in the form of contrasting cases, and attempt to invent a mathematical model to resolve this dilemma. For example, Figure 1 shows four possible pitching machines. Students are asked to invent a method that will allow them to pick the most reliable machine. The concept of contrasting cases comes from the perceptual learning literature, since these cases, when appropriately designed, emphasize differences in the deep structure of the examples (Gibson &amp;amp; Gibson, 1955). The invention process includes designing a model, applying it to the given set of contrasting cases, evaluating the result, and debugging the model. This iterative process is very similar to the debugging process as described by Klahr and Carver (1988; Figure 2). Unlike other inquiry-based manipulations (cf. Lehrer et al., 2001; de Jong &amp;amp; van Joolingen, 1998), the goal of the IPL process is not for students to discover the correct model, but to prepare them for subsequent instruction. During the instruction students share their models, critique their peers’ models, and learn the expert solutions (a similar classroom critique process was shown to be effective by White &amp;amp; Frederiksen, 1998). Preparation for learning from the instruction is evaluated using an accelerated future learning assessment, which includes instruction embedded in the test in the form of a solved example. 
Schwartz (2004) found that only students who invented prior to the test were able to take advantage of that instruction in order to solve novel problems, while students who practiced a given visual method prior to the test did not take advantage of the embedded learning resource and thus could not solve the target problem. This shows that the IPL process has a positive effect on students’ ability to independently learn from solved examples. In the case of the contrasting cases given in Figure 1, subsequent instruction will introduce students to the notion (and formulas) of variance.&lt;br /&gt;
While the invention group was superior to the instruction-and-practice group on the accelerated future learning measure, there was no direct comparison of normal or transfer measures between the invention and instruction-and-practice conditions (though invention students showed pre-to-post gains, and were shown to outperform college students). Also, it is not yet clear how robust this pedagogy is and what its key features are. &lt;br /&gt;
&lt;br /&gt;
Example of contrasting cases (topic: variability)&lt;br /&gt;
&lt;br /&gt;
[[Image:contrasting cases.jpg]]&lt;br /&gt;
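To illustrate the target concept of the contrasting cases above: variance distinguishes machines with the same average accuracy but different reliability. A minimal sketch with made-up landing-distance data (not the actual cases):

```python
# Hypothetical illustration of the expert solution the instruction targets:
# variance as a measure of a pitching machine's (un)reliability.

def variance(xs):
    """Population variance: mean squared deviation from the mean."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

machines = {
    "A": [2, 3, 2, 3],  # landing distances from target, in feet (made up)
    "B": [0, 5, 1, 4],  # same mean distance, but much more spread out
}
most_reliable = min(machines, key=lambda k: variance(machines[k]))
print(most_reliable)  # machine A: same mean as B, smaller variance
```

The two hypothetical machines have identical mean distances, so only a spread-sensitive model (what students are asked to invent) can tell them apart.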
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The overall IPL process:&lt;br /&gt;
&lt;br /&gt;
[[Image:IPL process.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
See [[:Category:IPL|IPL Glossary]]&lt;br /&gt;
&lt;br /&gt;
=== Research questions ===&lt;br /&gt;
&lt;br /&gt;
# What is the overall effect of [[Invention task]]s on students&#039; domain knowledge, sense-making skills, and motivation, compared with [[direct instruction]]?&lt;br /&gt;
# What elements of invention contribute to that effect? What cognitive processes do they drive? In what ways does knowledge acquired following invention differ from knowledge acquired in direct instruction alone?&lt;br /&gt;
# Can the IPL process be scaled-up using technology?&lt;br /&gt;
&lt;br /&gt;
In addition, the project makes the following contributions:&lt;br /&gt;
# It compares different measures of [[robust learning]], in order to understand what aspect of knowledge can be assessed using what type of measure.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Different studies manipulated different stages of the [[Invention task]]:&lt;br /&gt;
* Observation (a.k.a. comparative reasoning): comparing contrasting cases that vary along deep features, with regard to target concepts&lt;br /&gt;
* Generative reasoning: designing novel mathematical procedures to compare the contrasting cases with regard to the target concept&lt;br /&gt;
* Evaluation: evaluating the designed models against the cases&lt;br /&gt;
&lt;br /&gt;
=== Dependent variables ===&lt;br /&gt;
&lt;br /&gt;
Domain knowledge (in increasing &#039;distance&#039; from instruction): &lt;br /&gt;
* Normal measures&lt;br /&gt;
* Transfer measures&lt;br /&gt;
* New strategy items (with learning resource)&lt;br /&gt;
* New strategy items (without learning resource)&lt;br /&gt;
&lt;br /&gt;
[[Image:IPL measures.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Motivation and affect:&lt;br /&gt;
* Behavioral measure: % of students who kept working during breaks&lt;br /&gt;
* Self reports&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
&lt;br /&gt;
One hypothesis argues that generative reasoning (in the form of symbolic invention) is necessary to improve encoding of subsequent instruction. First, generative reasoning facilitates a process in which students express their prior ideas, identify their shortcomings, and refine their mental models, thus enabling conceptual change (Smith, diSessa, &amp;amp; Roschelle, 1994). For example, the self-explanation literature shows that asking students to explain their errors facilitates conceptual shift (cf. Siegler, 2002). &lt;br /&gt;
By attempting to invent and understand how different symbolic procedures succeed (or fail) to capture the differences between the contrasting cases, students also acquire a more cohesive and integrated understanding of the deep features of the domain. The importance of the symbolic nature of the process was demonstrated by Schwartz, Martin, and Pfaffman (2005), who asked students to reason verbally or mathematically about the balance beam problem. All students noticed the deep features of the balance beam domain - distance and weight. However, only students who reasoned mathematically were able to reconcile the two dimensions into a single representation. Interestingly, students’ thinking evolved even though their solutions were not complete, similar to the IPL effect. &lt;br /&gt;
Lastly, the generative reasoning process may help students understand the function of the different components of the procedure (for example, dividing by N controls for sample size). Thus, students may encode the subsequent instruction by function and not merely by procedure. Functional mental models were previously shown to lead to better adaptation of knowledge (Kieras &amp;amp; Bovair, 1984). Hatano and Inagaki (1986) describe a similar process in which developing mental models of how procedures interact with empirical knowledge helps students acquire conceptual understanding of the domain. &lt;br /&gt;
An alternative hypothesis argues that comparative reasoning is sufficient to achieve the learning benefits of IPL. According to this hypothesis, the benefits of invention stem from noticing and encoding the deep features of the domain. The comparative reasoning activity achieves that benefit by asking students to compare contrasting cases that differ with respect to their deep features (Bransford &amp;amp; Schwartz, 2001). This qualitative analysis helps students set requirements for a valid model and thus acquire a better understanding (even if implicit) of the target concepts. Furthermore, according to this hypothesis, not only does the symbolic invention not contribute to future learning, it may waste students’ time (and thus reduce efficiency) or impose excessive cognitive load (Kirschner, Sweller &amp;amp; Clark, 2006).&lt;br /&gt;
A second research question addressed by our current study examines the effect of IPL on the flexibility of students’ knowledge. We follow a distinction made by McDaniel and Schlager (1990) between transfer problems that require the application of a learned strategy (conventional transfer problems) and transfer problems that require the generation of a new strategy. McDaniel and Schlager found that while discovery tasks improve students’ performance on the latter, they have no effect on conventional transfer problems. Schwartz and Martin (2004) add a twist to these results. They found that IPL improves students’ ability to solve new-strategy problems as long as they are provided with instruction on how to do so. To further investigate the effect of IPL on knowledge flexibility, we evaluate students’ ability to independently solve new-strategy problems and encode new-strategy instructions. Our hypothesis, as supported by McDaniel and Schlager (1990), is that students who are engaged in IPL will acquire more flexible knowledge and thus will demonstrate better performance on new-strategy items. At the same time they will not show better ability to use existing strategies in novel contexts (conventional transfer items). Furthermore, following the findings of Schwartz and Martin (2004), we hypothesize that the effect of IPL will be mainly on encoding new-strategy instructions. &lt;br /&gt;
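The kind of symbolic procedure at stake in these hypotheses - a spread measure in which dividing by N controls for sample size, as in the pitching-machine task - can be sketched in a few lines of Python. This is a minimal illustrative sketch; the data and the function name are assumptions, not the study's actual materials:

```python
# Hypothetical student-invented "reliability index" for the pitching-machine
# contrasting cases: mean absolute deviation from the average landing spot.
# (Illustrative sketch only; the data below are made up.)

def reliability_index(landing_spots):
    n = len(landing_spots)
    mean = sum(landing_spots) / n
    # Summing raw deviations would penalize machines with more recorded
    # pitches; dividing by n makes different sample sizes comparable.
    return sum(abs(x - mean) for x in landing_spots) / n

# Four machines with the same average landing spot but different spread.
machines = {
    "A": [2, 4, 6, 8, 10],
    "B": [6, 6, 6, 6, 6],
    "C": [2, 6, 6, 6, 10],
    "D": [2, 2, 6, 10, 10],
}
best = min(machines, key=lambda m: reliability_index(machines[m]))
print(best)  # "B" has zero spread, so it is the most reliable machine
```

A valid invention need not match the expert formula (variance); the point is that designing and debugging such a procedure surfaces the deep features - spread, and the role of N - that the subsequent instruction formalizes.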
&lt;br /&gt;
&lt;br /&gt;
==== Instructional Principles ====&lt;br /&gt;
See [[IPL Instructional Principles]]&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
Domain knowledge:&lt;br /&gt;
&lt;br /&gt;
[[Image:test results.jpg]]&lt;br /&gt;
&lt;br /&gt;
* IPL students in advanced classes were more capable of solving new-strategy items without a learning resource. In fact, in the absence of a learning resource, direct instruction students performed at floor, while IPL students performed as well as they did with the resource. &lt;br /&gt;
* This effect holds when controlling for simple domain knowledge (performance on normal items in the same test).&lt;br /&gt;
* This was found in multiple new-strategy items. However, all results were found on a single topic (central tendency and graphing). The single test item on the topic of variability failed to capture differences between conditions.&lt;br /&gt;
&lt;br /&gt;
Motivation:&lt;br /&gt;
* IPL students reported having benefited more (marginally significant: F=3.3, p&amp;lt;.07)&lt;br /&gt;
* There was a significant interaction between condition and test anxiety. Test anxiety was assessed using the MSLQ (Pintrich, 1999) before the study began. Students who reported higher test anxiety also reported having benefited more from IPL instruction, compared to high-anxiety students in the No Design condition. &lt;br /&gt;
* IPL students more often stayed in class to work during breaks (IPL: 16%; No Design: 3%). &lt;br /&gt;
* Furthermore, they did so during invention activities and not during show-and-practice activities, suggesting that it is the invention activities themselves that are motivating (IPL activities: 25%; show-and-practice activities: 7%)&lt;br /&gt;
&lt;br /&gt;
[[Image:motivational results.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
Regarding our first research question, we found that generative reasoning (on top of comparative reasoning) had a positive effect on students’ ability to solve new-strategy problems with no learning resource in the advanced classes. At the same time, as hypothesized, it had only a marginal effect on normal and conventional transfer items. These results are especially interesting since Full IPL students had approximately half the time for instruction and practice compared with their No Design counterparts. &lt;br /&gt;
Regarding the second research question, which dealt with students’ knowledge flexibility, we found that in the advanced classes, students who designed novel methods during IPL were more capable of solving problems that require the use of novel strategies. This finding echoes the effect found by McDaniel and Schlager (1990). Interestingly, the effect of IPL on new-strategy items with no resources holds even when controlling for performance on normal items on the same test. Thus, this effect can probably not be attributed to more domain knowledge. Instead, it is likely the outcome of a different encoding of domain knowledge, in a manner that is not reflected in normal or transfer items.&lt;br /&gt;
On further scrutiny, students in both conditions did equally well on all tasks for which they received some form of instruction - whether in class (on normal and conventional transfer items) or embedded in the test (on new-strategy items with embedded learning resources). Regarding the latter, it seems that Full IPL students did not need the additional instruction, whereas No Design students did not manage to solve the new-strategy problems without it. The performance of Full IPL students on new-strategy items remained virtually the same even in the absence of embedded instruction. This finding is at odds with earlier findings by Schwartz and Martin (2004), who found that IPL improves students’ ability to encode future instruction but not to solve novel problems without additional instruction. One explanation for the discrepancy between the studies is that the control group in Schwartz and Martin (2004) did not engage in comparative reasoning. Therefore, it may be that the comparative reasoning stage helped students in our study to encode the novel instruction.&lt;br /&gt;
An alternative explanation examines these results in terms of ‘distance’ from original classroom instruction. It may be that the embedded instruction on the first topic in our study was close to the classroom material, and thus simple enough for all students to encode. In contrast, the embedded learning resource in the study described by Schwartz and Martin (2004) was sufficiently far from the classroom instruction. Therefore, only IPL students, who had acquired more flexible knowledge, could learn from it and apply the acquired knowledge successfully. This explanation further suggests that in the absence of additional instruction, only Full IPL students in our study could make the leap and answer the target new-strategy items. &lt;br /&gt;
While this argument explains performance on new-strategy items (with or without instruction) in terms of distance from classroom instruction, it does not explain what factors determine this distance. What makes some items ‘closer’ than others? What prepared Full IPL students for improved performance on some items but not on others? &lt;br /&gt;
Students may grapple with many challenges during the invention phase, many of which do not receive attention during classroom instruction. Students who invent are exposed to various challenges by virtue of attempting to invent general valid methods. We hypothesize that students use knowledge acquired during these experiences when later integrating new-strategy tasks into their existing body of knowledge. For example, the post-tests in this study included three new-strategy items, requiring the following new strategies: (1) comparing multiple datasets in a single representation; (2) representing data in unconventional intervals; and (3) finding the ratio between variability and average in order to account for differences in magnitude. These topics were not covered during classroom instruction. However, when we analyzed students’ inventions, we noticed that many inventions included features that could prepare students to expand the instructed knowledge and invent the first two strategies (see Figure 3). Subsequently, Full IPL students demonstrated better performance on the relevant new-strategy items. At the same time, no student attempted during invention to compare datasets with different magnitudes. Correspondingly, Full IPL students did not exhibit better performance on this new-strategy item.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Connections===&lt;br /&gt;
&lt;br /&gt;
* Tim Nokes&#039;s study&lt;br /&gt;
&lt;br /&gt;
=== The Invention Lab ===&lt;br /&gt;
In addition to paper-and-pencil studies, we have created the Invention Lab.&lt;br /&gt;
&lt;br /&gt;
The Invention Lab is an intelligent tutoring system for IPL. To give intelligent feedback, it uses two models:&lt;br /&gt;
* A meta-cognitive model of the invention process&lt;br /&gt;
* A cognitive model of the main concepts in the domain&lt;br /&gt;
&lt;br /&gt;
[[Image:iLab.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Annotated bibliography ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Empirical Study]]&lt;br /&gt;
[[Category:Protected]]&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=File:ILab.jpg&amp;diff=9493</id>
		<title>File:ILab.jpg</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=File:ILab.jpg&amp;diff=9493"/>
		<updated>2009-05-21T03:56:28Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_IPL&amp;diff=9492</id>
		<title>Roll IPL</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_IPL&amp;diff=9492"/>
		<updated>2009-05-21T03:55:58Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Further Information */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Invention as preparation for learning ==&lt;br /&gt;
Ido Roll, Vincent Aleven, Bruce M. McLaren, Kenneth Koedinger&lt;br /&gt;
&lt;br /&gt;
Can invention activities prepare students to better learn from subsequent instruction, compared with instruction-and-practice only?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Meta-data ===&lt;br /&gt;
&lt;br /&gt;
PIs: Ido Roll, Vincent Aleven, Dan Schwartz, Ken Koedinger&lt;br /&gt;
&lt;br /&gt;
Other Contributors: David Klahr&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Study # !! Start Date !! End Date !! LearnLab Site !!  # of Students !! Total Participant Hours !! DataShop? &lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;1&#039;&#039;&#039; || 4/2007       || 4/2007     || North Hills || 20 || 40 || No, paper-and-pencil only&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;2&#039;&#039;&#039; || 9/2007       || 12/2007    || Community Day || 4 || 48 || No, paper-and-pencil only&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;3&#039;&#039;&#039; || 4/2008       || 5/2008     || Steel Valley || 125 || 900 || No, paper-and-pencil only&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;4&#039;&#039;&#039; || 4/2009       || 5/2009     || Steel Valley || 140 || 560 || Some of it in DataShop, the rest is getting there&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
&lt;br /&gt;
The [[assistance dilemma]] asks what form of assistance is most appropriate for different stages of learning. While direct instruction and practice have been shown to be efficient for novices, students often acquire shallow knowledge components and lack robust understanding. Some evidence suggests that invention using contrasting cases, prior to instruction and practice, can accelerate future learning, compared with instruction and practice alone (Schwartz &amp;amp; Martin, 2004). &lt;br /&gt;
The invention process as described in Schwartz &amp;amp; Martin (2004) includes the following stages: design (of a mathematical model to solve a class of problems); calculation (of the solution based on the model); evaluation (of its correctness); and debugging (of the faulty model). Notably, most students fail to invent mathematically valid models, so the goal is not for students to discover the correct solution. At the same time, students do make models that capture deep features of the class of problems, which prepares them to learn and understand the significance of expert solutions for handling such situations. Following the invention, students receive instruction on the expert solution (that is, formulas) and practice it. This procedure is based on the hypothesis that students’ own inventions, together with subsequent instruction, are sources for coordinative learning. By attempting to create a model that correctly distinguishes the “contrasting cases” (carefully selected instances within a class of problems), students notice (and to some degree invent) the problem features that an adequate model must take into account, and they attend to them during subsequent instruction. However, alternative explanations for the effectiveness of the IPL process are possible, with different instructional implications. A “debugging hypothesis” suggests that evaluation and debugging of pre-designed models are sufficient to promote future learning by directing students’ attention to the shortcomings of the designed models, and thus to the deep features of the domain. Alternatively, an “unfinished goals” hypothesis suggests that the effect is caused by students reaching impasses during invention. According to this hypothesis, the calculation and evaluation stages are sufficient for preparing for future learning.&lt;br /&gt;
We propose to investigate this in a series of ablation studies with the goal of better defining the invention process and identifying the cognitive processes involved. This includes a combination of in-vivo and lab studies within the Algebra LearnLab, contributing to the Coordinative Learning theoretical framework. Following the ablation studies we plan to implement the procedure in a Cognitive Tutor, which will be evaluated in a lab study. This will allow us to better operationalize the process, do a micro-genetic analysis of it, and identify productive patterns of learning trajectories using log mining. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
One of the main challenges of education is to help students achieve meaningful and robust learning. The assistance dilemma raises the question of what form (and ‘amount’) of assistance is most effective with different learners at different stages of the learning process (Koedinger &amp;amp; Aleven, in press). Instruction followed by practice is known to be very efficient for teaching novices (e.g., Koedinger, Anderson, Hadley &amp;amp; Mark, 1997); yet, students often acquire shallow procedural skills and fail to acquire conceptual understanding (Aleven &amp;amp; Koedinger, 2002). This can be attributed, at least in part, to students using superficial features rather than encoding the deep features of the domain (Chi, Feltovich &amp;amp; Glaser, 1981). &lt;br /&gt;
One approach to getting students to attend to and encode the deep features is to add an invention phase prior to instruction. Invention as preparation for learning (IPL) was shown to help students better cope with novel situations that require learning (Schwartz &amp;amp; Martin, 2004; Sears, 2006). In this process students are presented with a dilemma in the form of contrasting cases, and attempt to invent a mathematical model to resolve this dilemma. For example, Figure 1 shows four possible pitching machines. Students are asked to invent a method that will allow them to pick the most reliable machine. The concept of contrasting cases comes from the perceptual learning literature, since these cases, when appropriately designed, emphasize differences in the deep structure of the examples (Gibson &amp;amp; Gibson, 1955). The invention process includes designing a model, applying it to the given set of contrasting cases, evaluating the result, and debugging the model. This iterative process is very similar to the debugging process as described by Klahr and Carver (1988; Figure 2). Unlike other inquiry-based manipulations (cf. Lehrer et al., 2001; de Jong &amp;amp; van Joolingen, 1998), the goal of the IPL process is not for students to discover the correct model, but to prepare them for subsequent instruction. During the instruction students share their models, critique their peers’ models, and learn the expert solutions (a similar classroom critique process was shown to be effective by White &amp;amp; Frederiksen, 1998). Preparation for learning from the instruction is evaluated using an accelerated future learning assessment, which includes instruction embedded in the test in the form of a solved example. 
Schwartz and Martin (2004) found that only students who invented prior to the test were able to take advantage of that instruction in order to solve a novel problem, while students who practiced a given visual method prior to the test did not take advantage of the embedded learning resource and thus could not solve the target problem. This shows that the IPL process has a positive effect on students’ ability to independently learn from solved examples. In the case of the contrasting cases given in Figure 1, subsequent instruction will introduce students to the notion (and formulas) of variance.&lt;br /&gt;
While the invention group was superior to the instruction-and-practice group on the accelerated future learning measure, there was no direct comparison of normal or transfer measures between the invention and instruction-and-practice conditions (though invention students showed pre-to-post gains, and were shown to outperform college students). Also, it is not yet clear how robust this pedagogy is and what its key features are. &lt;br /&gt;
&lt;br /&gt;
Example for contrasting cases (topic - variability)&lt;br /&gt;
&lt;br /&gt;
[[Image:contrasting cases.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The overall IPL process:&lt;br /&gt;
&lt;br /&gt;
[[Image:IPL process.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
See [[:Category:IPL|IPL Glossary]]&lt;br /&gt;
&lt;br /&gt;
=== Research questions ===&lt;br /&gt;
&lt;br /&gt;
# What is the overall effect of [[Invention task]]s on students’ domain knowledge, sense-making skills, and motivation, compared with [[direct instruction]]?&lt;br /&gt;
# What elements of invention contribute to that effect? What cognitive processes do they drive? In what ways does knowledge acquired following invention differ from knowledge acquired in direct instruction alone?&lt;br /&gt;
# Can the IPL process be scaled up using technology?&lt;br /&gt;
&lt;br /&gt;
In addition, the project makes the following contributions:&lt;br /&gt;
# It compares different measures of [[robust learning]], in order to understand what aspect of knowledge can be assessed using what type of measure.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Different studies manipulated different stages of the [[Invention task]]:&lt;br /&gt;
* Observation (a.k.a. comparative reasoning): comparing contrasting cases that vary along deep features, with regard to target concepts&lt;br /&gt;
* Generative reasoning: designing novel mathematical procedures to compare the contrasting cases with regard to the target concept&lt;br /&gt;
* Evaluation: evaluating the invented models&lt;br /&gt;
&lt;br /&gt;
=== Dependent variables ===&lt;br /&gt;
&lt;br /&gt;
Domain knowledge (in increasing &#039;distance&#039; from instruction): &lt;br /&gt;
* Normal measures&lt;br /&gt;
* Transfer measures&lt;br /&gt;
* New strategy items (with learning resource)&lt;br /&gt;
* New strategy items (without learning resource)&lt;br /&gt;
&lt;br /&gt;
[[Image:IPL measures.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Motivation and affect:&lt;br /&gt;
* Behavioral measure: % of students who kept working during breaks&lt;br /&gt;
* Self reports&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
&lt;br /&gt;
One hypothesis argues that generative reasoning (in the form of symbolic invention) is necessary to improve encoding of subsequent instruction. First, generative reasoning facilitates a process in which students express their prior ideas, identify their shortcomings, and refine their mental models, thus enabling conceptual change (Smith, diSessa, &amp;amp; Roschelle, 1994). For example, the self-explanation literature shows that asking students to explain their errors facilitates conceptual shift (cf. Siegler, 2002). &lt;br /&gt;
By attempting to invent and understand how different symbolic procedures succeed (or fail) to capture the differences between the contrasting cases, students also acquire a more cohesive and integrated understanding of the deep features of the domain. The importance of the symbolic nature of the process was demonstrated by Schwartz, Martin, and Pfaffman (2005), who asked students to reason verbally or mathematically about the balance beam problem. All students noticed the deep features of the balance beam domain - distance and weight. However, only students who reasoned mathematically were able to reconcile the two dimensions into a single representation. Interestingly, students’ thinking evolved even though their solutions were not complete, similar to the IPL effect. &lt;br /&gt;
Lastly, the generative reasoning process may help students understand the function of the different components of the procedure (for example, dividing by N controls for sample size). Thus, students may encode the subsequent instruction by function and not merely by procedure. Functional mental models were previously shown to lead to better adaptation of knowledge (Kieras &amp;amp; Bovair, 1984). Hatano and Inagaki (1986) describe a similar process in which developing mental models of how procedures interact with empirical knowledge helps students acquire conceptual understanding of the domain. &lt;br /&gt;
An alternative hypothesis argues that comparative reasoning is sufficient to achieve the learning benefits of IPL. According to this hypothesis, the benefits of invention stem from noticing and encoding the deep features of the domain. The comparative reasoning activity achieves that benefit by asking students to compare contrasting cases that differ with respect to their deep features (Bransford &amp;amp; Schwartz, 2001). This qualitative analysis helps students set requirements for a valid model and thus acquire a better understanding (even if implicit) of the target concepts. Furthermore, according to this hypothesis, not only does the symbolic invention not contribute to future learning, it may waste students’ time (and thus reduce efficiency) or impose excessive cognitive load (Kirschner, Sweller &amp;amp; Clark, 2006).&lt;br /&gt;
A second research question addressed by our current study examines the effect of IPL on the flexibility of students’ knowledge. We follow a distinction made by McDaniel and Schlager (1990) between transfer problems that require the application of a learned strategy (conventional transfer problems) and transfer problems that require the generation of a new strategy. McDaniel and Schlager found that while discovery tasks improve students’ performance on the latter, they have no effect on conventional transfer problems. Schwartz and Martin (2004) add a twist to these results. They found that IPL improves students’ ability to solve new-strategy problems as long as they are provided with instruction on how to do so. To further investigate the effect of IPL on knowledge flexibility, we evaluate students’ ability to independently solve new-strategy problems and encode new-strategy instructions. Our hypothesis, as supported by McDaniel and Schlager (1990), is that students who are engaged in IPL will acquire more flexible knowledge and thus will demonstrate better performance on new-strategy items. At the same time they will not show better ability to use existing strategies in novel contexts (conventional transfer items). Furthermore, following the findings of Schwartz and Martin (2004), we hypothesize that the effect of IPL will be mainly on encoding new-strategy instructions. &lt;br /&gt;
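The kind of symbolic procedure at stake in these hypotheses - a spread measure in which dividing by N controls for sample size, as in the pitching-machine task - can be sketched in a few lines of Python. This is a minimal illustrative sketch; the data and the function name are assumptions, not the study's actual materials:

```python
# Hypothetical student-invented "reliability index" for the pitching-machine
# contrasting cases: mean absolute deviation from the average landing spot.
# (Illustrative sketch only; the data below are made up.)

def reliability_index(landing_spots):
    n = len(landing_spots)
    mean = sum(landing_spots) / n
    # Summing raw deviations would penalize machines with more recorded
    # pitches; dividing by n makes different sample sizes comparable.
    return sum(abs(x - mean) for x in landing_spots) / n

# Four machines with the same average landing spot but different spread.
machines = {
    "A": [2, 4, 6, 8, 10],
    "B": [6, 6, 6, 6, 6],
    "C": [2, 6, 6, 6, 10],
    "D": [2, 2, 6, 10, 10],
}
best = min(machines, key=lambda m: reliability_index(machines[m]))
print(best)  # "B" has zero spread, so it is the most reliable machine
```

A valid invention need not match the expert formula (variance); the point is that designing and debugging such a procedure surfaces the deep features - spread, and the role of N - that the subsequent instruction formalizes.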
&lt;br /&gt;
&lt;br /&gt;
==== Instructional Principles ====&lt;br /&gt;
See [[IPL Instructional Principles]]&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
Domain knowledge:&lt;br /&gt;
&lt;br /&gt;
[[Image:test results.jpg]]&lt;br /&gt;
&lt;br /&gt;
* IPL students in advanced classes were more capable of solving new-strategy items without a learning resource. In fact, in the absence of a learning resource, direct instruction students performed at floor, while IPL students performed as well as they did with the resource. &lt;br /&gt;
* This effect holds when controlling for simple domain knowledge (performance on normal items in the same test).&lt;br /&gt;
* This was found in multiple new-strategy items. However, all results were found on a single topic (central tendency and graphing). The single test item on the topic of variability failed to capture differences between the groups.&lt;br /&gt;
&lt;br /&gt;
Motivation:&lt;br /&gt;
* IPL students reported having benefited more (marginally significant: F=3.3, p&amp;lt;.07)&lt;br /&gt;
* There was a significant interaction between condition and test anxiety. Test anxiety was assessed using the MSLQ (Pintrich, 1999) before the study began. Students who reported higher test anxiety also reported having benefited more from IPL instruction, compared to high-anxiety students in the No Design condition. &lt;br /&gt;
* IPL students more often stayed in class to work during breaks (IPL: 16%; No Design: 3%). &lt;br /&gt;
* Furthermore, they did so during invention activities and not during show-and-practice activities, suggesting that it is the invention activities themselves that are motivating.&lt;br /&gt;
&lt;br /&gt;
[[Image:motivational results.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
Regarding our first research question, we found that generative reasoning (on top of comparative reasoning) had a positive effect on students’ ability to solve new-strategy problems with no learning resource in the advanced classes. At the same time, as hypothesized, it had only a marginal effect on normal and conventional transfer items. These results are especially interesting since Full IPL students had approximately half the time for instruction and practice compared with their No Design counterparts. &lt;br /&gt;
Regarding the second research question, which dealt with students’ knowledge flexibility, we found that in the advanced classes, students who designed novel methods during IPL were more capable of solving problems that require the use of novel strategies. This finding echoes the effect found by McDaniel and Schlager (1990). Interestingly, the effect of IPL on new-strategy items with no resources holds even when controlling for performance on normal items on the same test. Thus, this effect can probably not be attributed to more domain knowledge. Instead, it is likely the outcome of a different encoding of domain knowledge, in a manner that is not reflected in normal or transfer items.&lt;br /&gt;
On further scrutiny, students in both conditions did equally well on all tasks for which they received some form of instruction - whether in class (on normal and conventional transfer items) or embedded in the test (on new-strategy items with embedded learning resources). Regarding the latter, it seems that Full IPL students did not need the additional instruction, whereas No Design students did not manage to solve the new-strategy problems without it. The performance of Full IPL students on new-strategy items remained virtually the same even in the absence of embedded instruction. This finding is at odds with earlier findings by Schwartz and Martin (2004), who found that IPL improves students’ ability to encode future instruction but not to solve novel problems without additional instruction. One explanation for the discrepancy between the studies is that the control group in Schwartz and Martin (2004) did not engage in comparative reasoning. Therefore, it may be that the comparative reasoning stage helped students in our study to encode the novel instruction.&lt;br /&gt;
An alternative explanation examines these results in terms of ‘distance’ from original classroom instruction. It may be that the embedded instruction on the first topic in our study was close to the classroom material, and thus simple enough for all students to encode. In contrast, the embedded learning resource in the study described by Schwartz and Martin (2004) was sufficiently far from the classroom instruction. Therefore, only IPL students, who had acquired more flexible knowledge, could learn from it and apply the acquired knowledge successfully. This explanation further suggests that in the absence of additional instruction, only Full IPL students in our study could make the leap and answer the target new-strategy items. &lt;br /&gt;
While this argument explains performance on new-strategy items (with or without instruction) in terms of distance from classroom instruction, it does not explain what factors determine this distance. What makes some items ‘closer’ than others? What prepared Full IPL students for improved performance on some items but not on others? &lt;br /&gt;
Students may grapple with many challenges during the invention phase, many of which do not receive attention during classroom instruction. Students who invent are exposed to various challenges by virtue of attempting to invent generally valid methods. We hypothesize that students use knowledge acquired during these experiences when later integrating new-strategy tasks into their existing body of knowledge. For example, the post-tests in this study included three new-strategy items, requiring the following new strategies: (1) comparing multiple datasets in a single representation; (2) representing data in unconventional intervals; and (3) finding the ratio between variability and average in order to account for differences in magnitude. These topics were not covered during classroom instruction. However, when we analyzed students’ inventions, we noticed that many inventions included features that could prepare students to expand the instructed knowledge and invent the first two strategies (see Figure 3). Subsequently, Full IPL students demonstrated better performance on the relevant new-strategy items. At the same time, no student attempted during invention to compare datasets with different magnitudes. Correspondingly, Full IPL students did not exhibit better performance on this new-strategy item.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Connections===&lt;br /&gt;
&lt;br /&gt;
* Tim Nokes&#039;s study&lt;br /&gt;
&lt;br /&gt;
=== The Invention Lab ===&lt;br /&gt;
In addition to the paper-and-pencil studies, we have created the Invention Lab.&lt;br /&gt;
&lt;br /&gt;
The Invention Lab is an intelligent tutoring system for IPL. To give intelligent feedback, it uses two models:&lt;br /&gt;
* A meta-cognitive model of the invention process&lt;br /&gt;
* A cognitive model of the main concepts in the domain&lt;br /&gt;
&lt;br /&gt;
[[Image:iLab.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Annotated bibliography ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Empirical Study]]&lt;br /&gt;
[[Category:Protected]]&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=File:Motivational_results.jpg&amp;diff=9491</id>
		<title>File:Motivational results.jpg</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=File:Motivational_results.jpg&amp;diff=9491"/>
		<updated>2009-05-21T03:54:10Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=File:Test_results.jpg&amp;diff=9490</id>
		<title>File:Test results.jpg</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=File:Test_results.jpg&amp;diff=9490"/>
		<updated>2009-05-21T03:53:40Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_IPL&amp;diff=9489</id>
		<title>Roll IPL</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_IPL&amp;diff=9489"/>
		<updated>2009-05-21T03:53:20Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Findings */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Invention as preparation for learning ==&lt;br /&gt;
Ido Roll, Vincent Aleven, Bruce M. McLaren, Kenneth Koedinger&lt;br /&gt;
&lt;br /&gt;
Can invention activities prepare students to better learn from subsequent instruction, compared with instruction-and-practice only?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Meta-data ===&lt;br /&gt;
&lt;br /&gt;
PI&#039;s: Ido Roll, Vincent Aleven, Dan Schwartz, Ken Koedinger&lt;br /&gt;
&lt;br /&gt;
Other Contributors: David Klahr&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Study # !! Start Date !! End Date !! LearnLab Site !!  # of Students !! Total Participant Hours !! DataShop? &lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;1&#039;&#039;&#039; || 4/2007       || 4/2007     || North Hills || 20 || 40 || No, paper-and-pencil only&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;2&#039;&#039;&#039; || 9/2007       || 12/2007    || Community Day || 4 || 48 || No, paper-and-pencil only&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;3&#039;&#039;&#039; || 4/2008       || 5/2008     || Steel Valley || 125 || 900 || No, paper-and-pencil only&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;4&#039;&#039;&#039; || 4/2009       || 5/2009     || Steel Valley || 140 || 560 || Some of it in DataShop, the rest is getting there&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
&lt;br /&gt;
The [[assistance dilemma]] asks what form of assistance is most appropriate for different stages of learning. While direct instruction and practice have been shown to be efficient for novices, students often acquire shallow knowledge components and lack robust understanding. Some evidence suggests that invention using contrasting cases, prior to instruction and practice, can accelerate future learning, compared with instruction and practice alone (Schwartz &amp;amp; Martin, 2004). &lt;br /&gt;
The invention process as described in Schwartz &amp;amp; Martin (2004) includes the following stages: design (of a mathematical model to solve a class of problems); calculation (of the solution based on the model); evaluation (of its correctness); and debugging (of the faulty model). Notably, most students fail to invent mathematically valid models, so the goal is not for students to discover the correct solution. At the same time, students do make models that capture deep features of the class of problems, which prepares them to learn and understand the significance of expert solutions for handling such situations. Following the invention, students receive instruction on the expert solution (that is, formulas) and practice it. This procedure is based on the hypothesis that students’ own inventions, together with subsequent instruction, are sources for coordinative learning. By attempting to create a model that correctly distinguishes the “contrasting cases” (carefully selected instances within a class of problems), students notice (and to some degree invent) the problem features that an adequate model must take into account, and they attend to them during subsequent instruction. However, alternative explanations for the effectiveness of the IPL process are possible, with different instructional implications. A “debugging hypothesis” suggests that evaluation and debugging of pre-designed models are sufficient to promote future learning by directing students’ attention to the shortcomings of the designed models, and thus to the deep features of the domain. Alternatively, an “unfinished goals” hypothesis suggests that the effect is caused by students reaching impasses during invention. According to this hypothesis, calculation and evaluation alone are sufficient for preparing for future learning.&lt;br /&gt;
We propose to investigate this in a series of ablation studies with the goal of better defining the invention process and identifying the cognitive processes involved. This includes a combination of in-vivo and lab studies within the Algebra LearnLab, contributing to the Coordinative Learning theoretical framework. Following the ablation studies we plan to implement the procedure in a Cognitive Tutor, which will be evaluated in a lab study. This will allow us to better operationalize the process, do a micro-genetic analysis of it, and identify productive patterns of learning trajectories using log mining. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
One of the main challenges of education is to help students reach meaningful and robust learning. The assistance dilemma raises the question of what form (and ‘amount’) of assistance is most effective with different learners at different stages of the learning process (Koedinger &amp;amp; Aleven, in press). Instruction followed by practice is known to be very efficient for teaching novices (e.g., Koedinger, Anderson, Hadley &amp;amp; Mark, 1997); yet, students often acquire shallow procedural skills and fail to acquire conceptual understanding (Aleven &amp;amp; Koedinger, 2002). This can be attributed, at least in part, to students using superficial features and not encoding the deep features of the domain (Chi, Feltovich &amp;amp; Glaser, 1981). &lt;br /&gt;
One approach to getting students to attend to and encode the deep features is to add an invention phase prior to instruction. Invention as preparation for learning (IPL) was shown to help students better cope with novel situations that require learning (Schwartz &amp;amp; Martin, 2004; Sears, 2006). In this process students are presented with a dilemma in the form of contrasting cases, and attempt to invent a mathematical model to resolve this dilemma. For example, Figure 1 shows four possible pitching machines. Students are asked to invent a method that will allow them to pick the most reliable machine. The concept of contrasting cases comes from the perceptual learning literature, since these cases, when appropriately designed, emphasize differences in the deep structure of the examples (Gibson &amp;amp; Gibson, 1955). The invention process includes designing a model, applying it to the given set of contrasting cases, evaluating the result, and debugging the model. This iterative process is very similar to the debugging process as described by Klahr and Carver (1988; Figure 2). Unlike other inquiry-based manipulations (cf. Lehrer et al., 2001; de Jong &amp;amp; van Joolingen, 1998), the goal of the IPL process is not for students to discover the correct model, but to prepare them for subsequent instruction. During the instruction students share their models, critique their peers’ models, and learn the expert solutions (a similar classroom critique process was shown to be effective by White &amp;amp; Frederiksen, 1998). Preparation for learning from the instruction is evaluated using an accelerated future learning assessment, which includes instruction embedded in the test in the form of a solved example. 
Schwartz and Martin (2004) found that only students who invented prior to the test were able to take advantage of that instruction in order to solve novel problems, while students who practiced a given visual method prior to the test did not take advantage of the embedded learning resource and thus could not solve the target problem. This shows that the IPL process has a positive effect on students’ ability to independently learn from solved examples. In the case of the contrasting cases given in Figure 1, subsequent instruction will introduce students to the notion (and formulas) of variance.&lt;br /&gt;
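For the pitching-machine dilemma, the expert solution students eventually receive is a measure of spread. As a minimal sketch (the machine names and landing distances below are hypothetical illustrations, not data from the study materials):

```python
# Hypothetical landing distances (feet from target) for four pitching machines.
machines = {
    "A": [2, 3, 2, 3, 2, 3],
    "B": [0, 5, 0, 5, 0, 5],
    "C": [1, 1, 4, 4, 1, 4],
    "D": [2, 2, 2, 4, 2, 3],
}

def variance(xs):
    """Population variance: mean squared deviation from the mean.
    Dividing by len(xs) is the component that controls for sample size."""
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

# The most reliable machine is the one with the smallest variance.
most_reliable = min(machines, key=lambda name: variance(machines[name]))
```

Well-designed contrasting cases make such a formula meaningful: each case defeats a naive invention (e.g., comparing ranges, or forgetting to divide by the number of throws), exposing the deep feature the expert solution must handle.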
While the invention group was superior to the instruction-and-practice group on the accelerated future learning measure, there was no direct comparison of normal or transfer measures between the invention and instruction-and-practice conditions (though invention students showed pre-to-post gains, and were shown to outperform college students). Also, it is not yet clear how robust this pedagogy is and what its key features are. &lt;br /&gt;
&lt;br /&gt;
Example of contrasting cases (topic: variability)&lt;br /&gt;
&lt;br /&gt;
[[Image:contrasting cases.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The overall IPL process:&lt;br /&gt;
&lt;br /&gt;
[[Image:IPL process.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
See [[:Category:IPL|IPL Glossary]]&lt;br /&gt;
&lt;br /&gt;
=== Research questions ===&lt;br /&gt;
&lt;br /&gt;
# What is the overall effect of [[Invention task]]s on students’ domain knowledge, sense-making skills, and motivation, compared with [[direct instruction]]?&lt;br /&gt;
# What elements of invention contribute to that effect? What cognitive processes do they drive? In what ways does knowledge acquired following invention differ from knowledge acquired in direct instruction alone?&lt;br /&gt;
# Can the IPL process be scaled up using technology?&lt;br /&gt;
&lt;br /&gt;
In addition, the project makes the following contributions:&lt;br /&gt;
# It compares different measures of [[robust learning]], in order to understand what aspect of knowledge can be assessed using what type of measure.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Different studies manipulated different stages of the [[Invention task]]:&lt;br /&gt;
* Observation (a.k.a. comparative reasoning): comparing contrasting cases that vary along deep features, with regard to target concepts&lt;br /&gt;
* Generative reasoning: designing novel mathematical procedures to compare the contrasting cases with regard to the target concept&lt;br /&gt;
* Evaluation: evaluating the invented models&lt;br /&gt;
&lt;br /&gt;
=== Dependent variables ===&lt;br /&gt;
&lt;br /&gt;
Domain knowledge (in increasing &#039;distance&#039; from instruction): &lt;br /&gt;
* Normal measures&lt;br /&gt;
* Transfer measures&lt;br /&gt;
* New strategy items (with learning resource)&lt;br /&gt;
* New strategy items (without learning resource)&lt;br /&gt;
&lt;br /&gt;
[[Image:IPL measures.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Motivation and affect:&lt;br /&gt;
* Behavioral measure: % of students who kept working during breaks&lt;br /&gt;
* Self reports&lt;br /&gt;
&lt;br /&gt;
=== Hypotheses ===&lt;br /&gt;
&lt;br /&gt;
One hypothesis argues that generative reasoning (in the form of symbolic invention) is necessary to improve encoding of subsequent instruction. First, generative reasoning facilitates a process in which students express their prior ideas, identify their shortcomings, and refine their mental models, thus enabling conceptual change (Smith, diSessa, &amp;amp; Roschelle, 1994). For example, the self-explanation literature shows that asking students to explain their errors facilitates conceptual shift (cf. Siegler, 2002). &lt;br /&gt;
By attempting to invent and understand how different symbolic procedures succeed (or fail) to capture the differences between the contrasting cases, students also acquire a more cohesive and integrated understanding of the deep features of the domain. The importance of the symbolic nature of the process was demonstrated by Schwartz, Martin, and Pfaffman (2005), who asked students to reason verbally or mathematically about the balance beam problem. All students noticed the deep features of the balance beam domain - distance and weight. However, only students who reasoned mathematically were able to reconcile the two dimensions to a single representation. Interestingly, students’ thinking evolved even though their solutions were not complete, similar to the IPL effect. &lt;br /&gt;
Lastly, the generative reasoning process may help students understand the function of the different components of the procedure (for example, dividing by N controls for sample size). Thus, students may encode the subsequent instruction by function and not merely by procedure. Functional mental models were previously shown to lead to better adaptation of knowledge (Kieras &amp;amp; Bovair, 1984). Hatano and Inagaki (1986) describe a similar process in which developing mental models of how procedures interact with empirical knowledge helps students acquire conceptual understanding of the domain. &lt;br /&gt;
An alternative hypothesis argues that comparative reasoning is sufficient to achieve the learning benefits of IPL. According to this hypothesis, the benefits of invention stem from noticing and encoding the deep features of the domain. The comparative reasoning activity achieves that benefit by asking students to compare contrasting cases that differ with respect to their deep features (Bransford &amp;amp; Schwartz, 2001). This qualitative analysis helps students set requirements for a valid model and thus acquire a better understanding (even if implicit) of the target concepts. Furthermore, according to this hypothesis, not only does the symbolic invention not contribute to future learning, it may waste students’ time (and thus reduce efficiency) or impose excessive cognitive load (Kirschner, Sweller &amp;amp; Clark, 2006).&lt;br /&gt;
A second research question addressed by our current study examines the effect of IPL on the flexibility of students’ knowledge. We follow a distinction made by McDaniel and Schlager (1990) between transfer problems that require the application of a learned strategy (conventional transfer problems) and transfer problems that require the generation of a new strategy. McDaniel and Schlager found that while discovery tasks improve students’ performance on the latter, they have no effect on conventional transfer problems. Schwartz and Martin (2004) add a twist to these results. They found that IPL improves students’ ability to solve new-strategy problems as long as they are provided with instruction on how to do so. To further investigate the effect of IPL on knowledge flexibility, we evaluate students’ ability to independently solve new-strategy problems and encode new-strategy instructions. Our hypothesis, as supported by McDaniel and Schlager (1990), is that students who are engaged in IPL will acquire more flexible knowledge and thus will demonstrate better performance on new-strategy items. At the same time they will not show better ability to use existing strategies in novel contexts (conventional transfer items). Furthermore, following the findings of Schwartz and Martin (2004), we hypothesize that the effect of IPL will be mainly on encoding new-strategy instructions. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Instructional Principles ====&lt;br /&gt;
See [[IPL Instructional Principles]]&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
Domain knowledge:&lt;br /&gt;
&lt;br /&gt;
[[Image:test results.jpg]]&lt;br /&gt;
&lt;br /&gt;
* IPL students in advanced classes were more capable of solving new-strategy items without a learning resource. In fact, in the absence of a learning resource, direct instruction students performed at floor, while IPL students performed as well as with the resource. &lt;br /&gt;
* This effect holds when controlling for simple domain knowledge (performance on normal items in the same test).&lt;br /&gt;
* This was found in multiple new-strategy items. However, all results were found on a single topic (central tendency and graphing). The single test item on the topic of variability failed to capture a difference between the groups.&lt;br /&gt;
&lt;br /&gt;
Motivation:&lt;br /&gt;
* IPL students reported having benefited more (F=3.3, p&amp;lt;.07)&lt;br /&gt;
* There was a significant interaction between condition and test anxiety. Test anxiety was assessed using the MSLQ (Pintrich, 1999) before the study began. Students who reported lower test anxiety also reported benefiting more from IPL instruction, compared to high-anxiety students in the no design condition. &lt;br /&gt;
* IPL students stayed more often in class to work during breaks (IPL: 16% No Design: 3%). &lt;br /&gt;
* Furthermore, they did so during invention activities and not during show-and-practice activities, suggesting that the invention activities themselves are motivating.&lt;br /&gt;
&lt;br /&gt;
[[Image:motivational results.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
Regarding our first research question, we found that generative reasoning (on top of comparative reasoning) had a positive effect on students’ ability to solve new-strategy problems with no learning resource in the advanced classes. At the same time, as hypothesized, it had only a marginal effect on normal or conventional transfer items. These results are especially interesting since Full IPL students had approximately half the time for instruction and practice compared with their No Design counterparts.  &lt;br /&gt;
Regarding the second research question, which dealt with students’ knowledge flexibility, we found that in the advanced classes, students who designed novel methods during IPL were more capable of solving problems that require the use of novel strategies. This finding echoes the effect found by McDaniel and Schlager (1990). Interestingly, the effect of IPL on new-strategy items with no resources holds even when controlling for performance on normal items on the same test. Thus, this effect can probably not be attributed to more domain knowledge. Instead, it is likely the outcome of a different encoding of domain knowledge, in a manner that is not reflected in normal or transfer items.&lt;br /&gt;
On further scrutiny, students in both conditions did equally well on all tasks for which they received some form of instruction - whether in class (on normal and conventional transfer items) or embedded in the test (on new-strategy items with embedded learning resources). Regarding the latter, it seems that Full IPL students did not need the additional instruction, whereas No Design students did not manage to solve the new-strategy problems without it. The performance of Full IPL students on new-strategy items remained virtually the same even in the absence of embedded instruction. This finding is at odds with earlier findings by Schwartz and Martin (2004), who found that IPL improves students’ ability to encode future instruction but not to solve novel problems without additional instruction. One explanation for the discrepancy between the studies is that the control group in Schwartz and Martin (2004) did not engage in comparative reasoning. Therefore, it may be that the comparative reasoning stage helped students in our study to encode the novel instruction.&lt;br /&gt;
An alternative explanation examines these results in terms of ‘distance’ from original classroom instruction. It may be that the embedded instruction on the first topic in our study was close to the classroom material, and thus simple enough for all students to encode. In contrast, the embedded learning resource in the study described by Schwartz and Martin (2004) was sufficiently far from the classroom instruction. Therefore, only IPL students, who had acquired more flexible knowledge, could learn from it and apply the acquired knowledge successfully. This explanation further suggests that in the absence of additional instruction, only Full IPL students in our study could make the leap and answer the target new-strategy items. &lt;br /&gt;
While this argument explains performance on new-strategy items (with or without instruction) in terms of distance from classroom instruction, it does not explain what factors determine this distance. What makes some items ‘closer’ than others? What prepared Full IPL students for improved performance on some items but not on others? &lt;br /&gt;
Students may grapple with many challenges during the invention phase, many of which do not receive attention during classroom instruction. Students who invent are exposed to various challenges by virtue of attempting to invent generally valid methods. We hypothesize that students use knowledge acquired during these experiences when later integrating new-strategy tasks into their existing body of knowledge. For example, the post-tests in this study included three new-strategy items, requiring the following new strategies: (1) comparing multiple datasets in a single representation; (2) representing data in unconventional intervals; and (3) finding the ratio between variability and average in order to account for differences in magnitude. These topics were not covered during classroom instruction. However, when we analyzed students’ inventions, we noticed that many inventions included features that could prepare students to expand the instructed knowledge and invent the first two strategies (see Figure 3). Subsequently, Full IPL students demonstrated better performance on the relevant new-strategy items. At the same time, no student attempted during invention to compare datasets with different magnitudes. Correspondingly, Full IPL students did not exhibit better performance on this new-strategy item.&lt;br /&gt;
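The third new strategy above, taking the ratio between variability and average, corresponds to what statisticians call the coefficient of variation. As a minimal sketch of why raw variability is misleading across magnitudes (the datasets below are hypothetical, for illustration only):

```python
# Hypothetical datasets with different magnitudes: raw variability alone
# would make the larger-scale dataset look far less consistent.
small_scale = [9, 10, 11, 10, 10]
large_scale = [90, 100, 110, 100, 100]

def mean(xs):
    return sum(xs) / len(xs)

def std_dev(xs):
    """Population standard deviation."""
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def coefficient_of_variation(xs):
    """Ratio of variability to average, comparable across magnitudes."""
    return std_dev(xs) / mean(xs)

# The standard deviations differ tenfold, but the ratios match:
# both datasets are equally consistent relative to their averages.
```

Since no student spontaneously faced this magnitude problem during invention, it is consistent with the account above that IPL conferred no advantage on this item.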
&lt;br /&gt;
&lt;br /&gt;
=== Connections===&lt;br /&gt;
&lt;br /&gt;
* Tim Nokes&#039;s study&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Annotated bibliography ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Empirical Study]]&lt;br /&gt;
[[Category:Protected]]&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=File:IPL_measures.jpg&amp;diff=9488</id>
		<title>File:IPL measures.jpg</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=File:IPL_measures.jpg&amp;diff=9488"/>
		<updated>2009-05-21T03:52:49Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_IPL&amp;diff=9487</id>
		<title>Roll IPL</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_IPL&amp;diff=9487"/>
		<updated>2009-05-21T03:52:12Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Background and Significance */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Invention as preparation for learning ==&lt;br /&gt;
Ido Roll, Vincent Aleven, Bruce M. McLaren, Kenneth Koedinger&lt;br /&gt;
&lt;br /&gt;
Can invention activities prepare students to better learn from subsequent instruction, compared with instruction-and-practice only?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Meta-data ===&lt;br /&gt;
&lt;br /&gt;
PI&#039;s: Ido Roll, Vincent Aleven, Dan Schwartz, Ken Koedinger&lt;br /&gt;
&lt;br /&gt;
Other Contributors: David Klahr&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Study # !! Start Date !! End Date !! LearnLab Site !!  # of Students !! Total Participant Hours !! DataShop? &lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;1&#039;&#039;&#039; || 4/2007       || 4/2007     || North Hills || 20 || 40 || No, paper-and-pencil only&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;2&#039;&#039;&#039; || 9/2007       || 12/2007    || Community Day || 4 || 48 || No, paper-and-pencil only&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;3&#039;&#039;&#039; || 4/2008       || 5/2008     || Steel Valley || 125 || 900 || No, paper-and-pencil only&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;4&#039;&#039;&#039; || 4/2009       || 5/2009     || Steel Valley || 140 || 560 || Some of it in DataShop, the rest is getting there&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
&lt;br /&gt;
The [[assistance dilemma]] asks what form of assistance is most appropriate for different stages of learning. While direct instruction and practice have been shown to be efficient for novices, students often acquire shallow knowledge components and lack robust understanding. Some evidence suggests that invention using contrasting cases, prior to instruction and practice, can accelerate future learning, compared with instruction and practice alone (Schwartz &amp;amp; Martin, 2004). &lt;br /&gt;
The invention process as described in Schwartz &amp;amp; Martin (2004) includes the following stages: design (of a mathematical model to solve a class of problems); calculation (of the solution based on the model); evaluation (of its correctness); and debugging (of the faulty model). Notably, most students fail to invent mathematically valid models, so the goal is not for students to discover the correct solution. At the same time, students do make models that capture deep features of the class of problems, which prepares them to learn and understand the significance of expert solutions for handling such situations. Following the invention, students receive instruction on the expert solution (that is, formulas) and practice it. This procedure is based on the hypothesis that students’ own inventions, together with subsequent instruction, are sources for coordinative learning. By attempting to create a model that correctly distinguishes the “contrasting cases” (carefully selected instances within a class of problems), students notice (and to some degree invent) the problem features that an adequate model must take into account, and they attend to them during subsequent instruction. However, alternative explanations for the effectiveness of the IPL process are possible, with different instructional implications. A “debugging hypothesis” suggests that evaluation and debugging of pre-designed models are sufficient to promote future learning by directing students’ attention to the shortcomings of the designed models, and thus to the deep features of the domain. Alternatively, an “unfinished goals” hypothesis suggests that the effect is caused by students reaching impasses during invention. According to this hypothesis, calculation and evaluation alone are sufficient for preparing for future learning.&lt;br /&gt;
We propose to investigate this in a series of ablation studies with the goal of better defining the invention process and identifying the cognitive processes involved. This includes a combination of in-vivo and lab studies within the Algebra LearnLab, contributing to the Coordinative Learning theoretical framework. Following the ablation studies we plan to implement the procedure in a Cognitive Tutor, which will be evaluated in a lab study. This will allow us to better operationalize the process, do a micro-genetic analysis of it, and identify productive patterns of learning trajectories using log mining. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
One of the main challenges of education is to help students reach meaningful and robust learning. The assistance dilemma raises the question of what form (and ‘amount’) of assistance is most effective with different learners at different stages of the learning process (Koedinger &amp;amp; Aleven, in press). Instruction followed by practice is known to be very efficient for teaching novices (e.g., Koedinger, Anderson, Hadley &amp;amp; Mark, 1997); yet, students often acquire shallow procedural skills and fail to acquire conceptual understanding (Aleven &amp;amp; Koedinger, 2002). This can be attributed, at least in part, to students using superficial features and not encoding the deep features of the domain (Chi, Feltovich &amp;amp; Glaser, 1981). &lt;br /&gt;
One approach to getting students to attend and encode the deep features is to add an invention phase prior to instruction. Invention as preparation for leaning (IPL) was shown to help students better cope with novel situations that require learning (Schwartz &amp;amp; Martin, 2004; Sears, 2006). In this process students are presented with a dilemma in the form of contrasting cases, and attempt to invent a mathematical model to resolve this dilemma. For example, Figure 1 shows four possible pitching machines. Students are asked to invent a method that will allow them to pick the most reliable machine. The concept of contrasting cases comes from the perceptual learning literature, since these cases, when appropriately designed, emphasize differences in the deep structure of the examples (Gibson &amp;amp; Gibson, 1955). The invention process includes designing a model, applying it to the given set of contrasting cases, evaluating the result, and debugging the model. This iterative process is very similar to the debugging process as described by Klahr and Carver (1988; Figure 2). Unlike other inquiry-based manipulations (cf. Lehrer et al., 2001; de Jong &amp;amp; van Joolingen, 1998), the goal of the IPL process is not for students to discover the correct model, but to prepare them for subsequent instruction. During the instruction students share their models, critic their peers’ models, and learn the expert solutions (A similar classroom critic process was shown to be effective by White &amp;amp; Frederiksen, 1998). Preparation for learning from the instruction is evaluated using accelerated future learning assessment. The accelerated future learning assessment includes an embedded instruction in the test in the form of solved example. 
Schwartz and Martin (2004) found that only students who invented prior to the test were able to take advantage of that instruction in order to solve a novel problem, while students who practiced a given visual method prior to the test did not take advantage of the embedded learning resource and thus could not solve the target problem. This shows that the IPL process has a positive effect on students’ ability to independently learn from solved examples. In the case of the contrasting cases given in Figure 1, subsequent instruction will introduce students to the notion (and formulas) of variance.&lt;br /&gt;
While the invention group was superior to the instruction-and-practice group on the accelerated future learning measure, there was no direct comparison of normal or transfer measures between the invention and instruction-and-practice conditions (though invention students showed pre-to-post gains, and were shown to outperform college students). Also, it is not yet clear how robust this pedagogy is and what its key features are. &lt;br /&gt;
&lt;br /&gt;
Example of contrasting cases (topic: variability)&lt;br /&gt;
&lt;br /&gt;
[[Image:contrasting cases.jpg]]&lt;br /&gt;
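The expert solution that the invention prepares students to learn can be sketched in a few lines (a minimal illustration; the machine names and pitch data below are invented for this example, not taken from the study materials): compute the variance of each machine's pitch locations and pick the machine with the smallest spread.

```python
# Minimal sketch of the expert solution (variance) for the pitching-machine
# contrasting cases. All data below are hypothetical, invented for illustration.

def variance(xs):
    """Population variance: the mean squared deviation from the mean."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Hypothetical pitch locations (distance from the target) for four machines.
machines = {
    "A": [2, 3, 2, 3],
    "B": [0, 5, 0, 5],
    "C": [1, 3, 2, 4],
    "D": [1, 4, 1, 4],
}

# The most reliable machine is the one whose pitches vary the least.
most_reliable = min(machines, key=lambda name: variance(machines[name]))
```

Each machine above has the same mean pitch location (2.5), so only a measure of spread, such as the variance, distinguishes them; this mirrors how well-designed contrasting cases isolate the deep feature (variability) from surface features.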
&lt;br /&gt;
The overall IPL process:&lt;br /&gt;
&lt;br /&gt;
[[Image:IPL process.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
See [[:Category:IPL|IPL Glossary]]&lt;br /&gt;
&lt;br /&gt;
=== Research questions ===&lt;br /&gt;
&lt;br /&gt;
# What is the overall effect of [[Invention task]]s on students’ domain knowledge, sense-making skills, and motivation, compared with [[direct instruction]]?&lt;br /&gt;
# What elements of invention contribute to that effect? What cognitive processes do they drive? In what ways does knowledge acquired following invention differ from knowledge acquired in direct instruction alone?&lt;br /&gt;
# Can the IPL process be scaled up using technology?&lt;br /&gt;
&lt;br /&gt;
In addition, the project makes the following contributions:&lt;br /&gt;
# It compares different measures of [[robust learning]], in order to understand what aspect of knowledge can be assessed using what type of measure.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Different studies manipulated different stages of the [[Invention task]]:&lt;br /&gt;
* Observation (a.k.a. comparative reasoning): comparing contrasting cases that vary along deep features, with regard to target concepts&lt;br /&gt;
* Generative reasoning: designing novel mathematical procedures to compare the contrasting cases with regard to the target concept&lt;br /&gt;
* Evaluation: evaluating the invented models&lt;br /&gt;
&lt;br /&gt;
=== Dependent variables ===&lt;br /&gt;
&lt;br /&gt;
Domain knowledge (in increasing &#039;distance&#039; from instruction): &lt;br /&gt;
* Normal measures&lt;br /&gt;
* Transfer measures&lt;br /&gt;
* New strategy items (with learning resource)&lt;br /&gt;
* New strategy items (without learning resource)&lt;br /&gt;
&lt;br /&gt;
[[Image:IPL measures.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Motivation and affect:&lt;br /&gt;
* Behavioral measure: % of students who kept working during breaks&lt;br /&gt;
* Self reports&lt;br /&gt;
&lt;br /&gt;
=== Hypotheses ===&lt;br /&gt;
&lt;br /&gt;
One hypothesis argues that generative reasoning (in the form of symbolic invention) is necessary to improve encoding of subsequent instruction. First, generative reasoning facilitates a process in which students express their prior ideas, identify their shortcomings, and refine their mental models, thus enabling conceptual change (Smith, diSessa, &amp;amp; Roschelle, 1994). For example, the self-explanation literature shows that asking students to explain their errors facilitates conceptual shift (cf. Siegler, 2002). &lt;br /&gt;
By attempting to invent and understand how different symbolic procedures succeed (or fail) to capture the differences between the contrasting cases, students also acquire a more cohesive and integrated understanding of the deep features of the domain. The importance of the symbolic nature of the process was demonstrated by Schwartz, Martin, and Pfaffman (2005), who asked students to reason verbally or mathematically about the balance beam problem. All students noticed the deep features of the balance beam domain - distance and weight. However, only students who reasoned mathematically were able to reconcile the two dimensions into a single representation. Interestingly, students’ thinking evolved even though their solutions were not complete, similar to the IPL effect. &lt;br /&gt;
Lastly, the generative reasoning process may help students understand the function of the different components of the procedure (for example, dividing by N controls for sample size). Thus, students may encode the subsequent instruction by function and not merely by procedure. Functional mental models were previously shown to lead to better adaptation of knowledge (Kieras &amp;amp; Bovair, 1984). Hatano and Inagaki (1986) describe a similar process in which developing mental models of how procedures interact with empirical knowledge helps students acquire conceptual understanding of the domain. &lt;br /&gt;
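The functional role mentioned above (dividing by N to control for sample size) can be made concrete with a short sketch (hypothetical data, purely illustrative): the raw sum of squared deviations grows with the number of observations even when the spread is unchanged, whereas dividing by N yields a size-independent measure.

```python
# Illustrates why dividing by N controls for sample size: two samples with the
# same spread but different sizes. Data are hypothetical, for illustration only.

def sum_sq_dev(xs):
    """Sum of squared deviations from the mean (not yet divided by N)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

small = [1, 5] * 2    # 4 observations with the same spread pattern...
large = [1, 5] * 10   # ...as these 20 observations.

# The raw totals differ (16 vs. 80) even though the spread is identical,
# but dividing by N gives the same variance (4.0) for both samples.
assert sum_sq_dev(small) != sum_sq_dev(large)
assert sum_sq_dev(small) / len(small) == sum_sq_dev(large) / len(large) == 4.0
```

Encoding the formula by the function of its parts (the sum captures total spread; the division makes it comparable across sample sizes) is the kind of functional encoding that the hypothesis above attributes to generative reasoning.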
An alternative hypothesis argues that comparative reasoning is sufficient to achieve the learning benefits of IPL. According to this hypothesis, the benefits of invention stem from noticing and encoding the deep features of the domain. The comparative reasoning activity achieves that benefit by asking students to compare contrasting cases that differ with respect to their deep features (Bransford &amp;amp; Schwartz, 2001). This qualitative analysis helps students set requirements for a valid model and thus acquire a better understanding (even if implicit) of the target concepts. Furthermore, according to this hypothesis, not only does the symbolic invention not contribute to future learning, it may waste students’ time (and thus reduce efficiency) or impose excessive cognitive load (Kirschner, Sweller &amp;amp; Clark, 2006).&lt;br /&gt;
A second research question addressed by our current study examines the effect of IPL on the flexibility of students’ knowledge. We follow a distinction made by McDaniel and Schlager (1990) between transfer problems that require the application of a learned strategy (conventional transfer problems) and transfer problems that require the generation of a new strategy. McDaniel and Schlager found that while discovery tasks improve students’ performance on the latter, they have no effect on conventional transfer problems. Schwartz and Martin (2004) add a twist to these results: they found that IPL improves students’ ability to solve new-strategy problems as long as they are provided with instruction on how to do so. To further investigate the effect of IPL on knowledge flexibility, we evaluate students’ ability to independently solve new-strategy problems and encode new-strategy instruction. Our hypothesis, consistent with McDaniel and Schlager (1990), is that students who engage in IPL will acquire more flexible knowledge and thus will demonstrate better performance on new-strategy items. At the same time, they will not show a better ability to use existing strategies in novel contexts (conventional transfer items). Furthermore, following the findings of Schwartz and Martin (2004), we hypothesize that the effect of IPL will mainly be on encoding new-strategy instruction. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Instructional Principles ====&lt;br /&gt;
See [[IPL Instructional Principles]]&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
Domain knowledge:&lt;br /&gt;
[[Image:test results.jpg]]&lt;br /&gt;
* IPL students in advanced classes were more capable of solving new-strategy items without a learning resource. In fact, in the absence of a learning resource, direct instruction students performed at floor, while IPL students performed as well as they did with the resource. &lt;br /&gt;
* This effect holds when controlling for simple domain knowledge (performance on normal items in the same test).&lt;br /&gt;
* This was found in multiple new-strategy items. However, all results were found on a single topic (central tendency and graphing). The single test item on the topic of variability failed to capture differences between the groups.&lt;br /&gt;
&lt;br /&gt;
Motivation:&lt;br /&gt;
* IPL students reported having benefited more (F=3.3, p&amp;lt;.07)&lt;br /&gt;
* There was a significant interaction between condition and test anxiety. Test anxiety was assessed using the MSLQ (Pintrich, 1999) before the study began. Students who reported lower test anxiety also reported having benefited more from IPL instruction, compared to high-anxiety students in the No Design condition. &lt;br /&gt;
* IPL students stayed in class to work during breaks more often (IPL: 16%; No Design: 3%). &lt;br /&gt;
* Furthermore, they did so during invention activities and not during show-and-practice activities, suggesting that it is the invention activities themselves that are motivating.&lt;br /&gt;
[[Image:motivational results.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
Regarding our first research question, we found that generative reasoning (on top of comparative reasoning) had a positive effect on students’ ability to solve new-strategy problems with no learning resource in the advanced classes. At the same time, as hypothesized, it had only a marginal effect on normal or conventional transfer items. These results are especially interesting since Full IPL students had approximately half the time for instruction and practice compared with their No Design counterparts. &lt;br /&gt;
Regarding the second research question, which dealt with students’ knowledge flexibility, we found that in the advanced classes, students who designed novel methods during IPL were more capable of solving problems that require the use of novel strategies. This finding echoes the effect found by McDaniel and Schlager (1990). Interestingly, the effect of IPL on new-strategy items with no resources holds even when controlling for performance on normal items on the same test. Thus, this effect can probably not be attributed to more domain knowledge. Instead, it is likely the outcome of a different encoding of domain knowledge, in a manner that is not reflected in normal or transfer items.&lt;br /&gt;
On further scrutiny, students in both conditions did equally well on all tasks for which they received some form of instruction - whether in class (on normal and conventional transfer items) or embedded in the test (on new-strategy items with embedded learning resources). Regarding the latter, it seems that Full IPL students did not need the additional instruction, whereas No Design students did not manage to solve the new-strategy problems without it. The performance of Full IPL students on new-strategy items remained virtually the same even in the absence of embedded instruction. This finding is at odds with earlier findings by Schwartz and Martin (2004), who found that IPL improves students’ ability to encode future instruction but not to solve novel problems without additional instruction. One explanation for the discrepancy between the studies is that the control group in Schwartz and Martin (2004) did not engage in comparative reasoning. Therefore, it may be that the comparative reasoning stage helped students in our study to encode the novel instruction.&lt;br /&gt;
An alternative explanation examines these results in terms of ‘distance’ from original classroom instruction. It may be that the embedded instruction on the first topic in our study was close to the classroom material, and thus simple enough for all students to encode. In contrast, the embedded learning resource in the study described by Schwartz and Martin (2004) was sufficiently far from the classroom instruction. Therefore, only IPL students, who had acquired more flexible knowledge, could learn from it and apply the acquired knowledge successfully. This explanation further suggests that in the absence of additional instruction, only Full IPL students in our study could make the leap and answer the target new-strategy items. &lt;br /&gt;
While this argument explains performance on new-strategy items (with or without instruction) in terms of distance from classroom instruction, it does not explain what factors determine this distance. What makes some items ‘closer’ than others? What prepared Full IPL students for improved performance on some items but not on others? &lt;br /&gt;
Students may grapple with many challenges during the invention phase, many of which do not receive attention during classroom instruction. Students who invent are exposed to various challenges by virtue of attempting to invent general valid methods. We hypothesize that students use knowledge acquired during these experiences when later integrating new-strategy tasks into their existing body of knowledge. For example, the post-tests in this study included three new-strategy items, requiring the following new strategies: (1) comparing multiple datasets in a single representation; (2) representing data in unconventional intervals; and (3) finding the ratio between variability and average in order to account for differences in magnitude. These topics were not covered during classroom instruction. However, when we analyzed students’ inventions, we noticed that many inventions included features that could prepare students to expand the instructed knowledge and invent the first two strategies (see Figure 3). Subsequently, Full IPL students demonstrated better performance on the relevant new-strategy items. At the same time, no student attempted during invention to compare datasets with different magnitudes. Correspondingly, Full IPL students did not exhibit better performance on this new-strategy item.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Connections===&lt;br /&gt;
&lt;br /&gt;
* Tim Nokes&#039;s study&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Annotated bibliography ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Empirical Study]]&lt;br /&gt;
[[Category:Protected]]&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_IPL&amp;diff=9486</id>
		<title>Roll IPL</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_IPL&amp;diff=9486"/>
		<updated>2009-05-21T03:51:59Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Background and Significance */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Invention as preparation for learning ==&lt;br /&gt;
Ido Roll, Vincent Aleven, Bruce M. McLaren, Kenneth Koedinger&lt;br /&gt;
&lt;br /&gt;
Can invention activities prepare students to better learn from subsequent instruction, compared with instruction-and-practice only?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Meta-data ===&lt;br /&gt;
&lt;br /&gt;
PIs: Ido Roll, Vincent Aleven, Dan Schwartz, Ken Koedinger&lt;br /&gt;
&lt;br /&gt;
Other Contributors: David Klahr&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Study # !! Start Date !! End Date !! LearnLab Site !!  # of Students !! Total Participant Hours !! DataShop? &lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;1&#039;&#039;&#039; || 4/2007       || 4/2007     || North Hills || 20 || 40 || No, paper-and-pencil only&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;2&#039;&#039;&#039; || 9/2007       || 12/2007    || Community Day || 4 || 48 || No, paper-and-pencil only&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;3&#039;&#039;&#039; || 4/2008       || 5/2008     || Steel Valley || 125 || 900 || No, paper-and-pencil only&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;4&#039;&#039;&#039; || 4/2009       || 5/2009     || Steel Valley || 140 || 560 || Some of it in DataShop, the rest is getting there&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
&lt;br /&gt;
The [[assistance dilemma]] asks what form of assistance is most appropriate for different stages of learning. While direct instruction and practice have been shown to be efficient for novices, students often acquire shallow knowledge components and lack robust understanding. Some evidence suggests that invention using contrasting cases, prior to instruction and practice, can accelerate future learning, compared with instruction and practice alone (Schwartz &amp;amp; Martin, 2004). &lt;br /&gt;
The invention process as described in Schwartz &amp;amp; Martin (2004) includes the following stages: design (of a mathematical model to solve a class of problems); calculation (of the solution based on the model); evaluation (of its correctness); and debugging (of the faulty model). Notably, most students fail to invent mathematically valid models, so the goal is not for students to discover the correct solution. At the same time, students do make models that capture deep features of the class of problems, which prepares them to learn and understand the significance of expert solutions for handling such situations. Following the invention, students receive instruction on the expert solution (that is, formulas) and practice it. This procedure is based on the hypothesis that students’ own inventions, together with subsequent instruction, are sources for coordinative learning. By attempting to create a model that correctly distinguishes the “contrasting cases” (carefully selected instances within a class of problems), students notice (and to some degree invent) the problem features that an adequate model must take into account, and they attend to them during subsequent instruction. However, alternative explanations for the effectiveness of the IPL process are possible, with different instructional implications. A “debugging hypothesis” suggests that evaluation and debugging of pre-designed models are sufficient to promote future learning by directing students’ attention to the shortcomings of the designed models, and thus to the deep features of the domain. Alternatively, an “unfinished goals” hypothesis suggests that the effect is caused by students reaching impasses during invention. According to this hypothesis, calculation and evaluation are sufficient for preparing for future learning.&lt;br /&gt;
We propose to investigate this in a series of ablation studies with the goal of better defining the invention process and identifying the cognitive processes involved. This includes a combination of in-vivo and lab studies within the Algebra LearnLab, contributing to the Coordinative Learning theoretical framework. Following the ablation studies we plan to implement the procedure in a Cognitive Tutor, which will be evaluated in a lab study. This will allow us to better operationalize the process, do a micro-genetic analysis of it, and identify productive patterns of learning trajectories using log mining. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
One of the main challenges of education is to help students reach meaningful and robust learning. The assistance dilemma raises the question of what form (and ‘amount’) of assistance is most effective with different learners at different stages of the learning process (Koedinger &amp;amp; Aleven, in press). Instruction followed by practice is known to be very efficient for teaching novices (e.g., Koedinger, Anderson, Hadley &amp;amp; Mark, 1997); yet, students often acquire shallow procedural skills and fail to acquire conceptual understanding (Aleven &amp;amp; Koedinger, 2002). This can be attributed, at least in part, to students using superficial features and not encoding the deep features of the domain (Chi, Feltovich &amp;amp; Glaser, 1981). &lt;br /&gt;
One approach to getting students to attend to and encode the deep features is to add an invention phase prior to instruction. Invention as preparation for learning (IPL) was shown to help students better cope with novel situations that require learning (Schwartz &amp;amp; Martin, 2004; Sears, 2006). In this process students are presented with a dilemma in the form of contrasting cases, and attempt to invent a mathematical model to resolve this dilemma. For example, Figure 1 shows four possible pitching machines. Students are asked to invent a method that will allow them to pick the most reliable machine. The concept of contrasting cases comes from the perceptual learning literature, since these cases, when appropriately designed, emphasize differences in the deep structure of the examples (Gibson &amp;amp; Gibson, 1955). The invention process includes designing a model, applying it to the given set of contrasting cases, evaluating the result, and debugging the model. This iterative process is very similar to the debugging process as described by Klahr and Carver (1988; Figure 2). Unlike other inquiry-based manipulations (cf. Lehrer et al., 2001; de Jong &amp;amp; van Joolingen, 1998), the goal of the IPL process is not for students to discover the correct model, but to prepare them for subsequent instruction. During the instruction students share their models, critique their peers’ models, and learn the expert solutions (a similar classroom critique process was shown to be effective by White &amp;amp; Frederiksen, 1998). Preparation for learning from the instruction is evaluated using an accelerated future learning assessment, which includes instruction embedded in the test in the form of a solved example. 
Schwartz and Martin (2004) found that only students who invented prior to the test were able to take advantage of that instruction in order to solve a novel problem, while students who practiced a given visual method prior to the test did not take advantage of the embedded learning resource and thus could not solve the target problem. This shows that the IPL process has a positive effect on students’ ability to independently learn from solved examples. In the case of the contrasting cases given in Figure 1, subsequent instruction will introduce students to the notion (and formulas) of variance.&lt;br /&gt;
While the invention group was superior to the instruction-and-practice group on the accelerated future learning measure, there was no direct comparison of normal or transfer measures between the invention and instruction-and-practice conditions (though invention students showed pre-to-post gains, and were shown to outperform college students). Also, it is not yet clear how robust this pedagogy is and what its key features are. &lt;br /&gt;
&lt;br /&gt;
Example of contrasting cases (topic: variability)&lt;br /&gt;
&lt;br /&gt;
[[Image:contrasting cases.jpg]]&lt;br /&gt;
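The expert solution that the invention prepares students to learn can be sketched in a few lines (a minimal illustration; the machine names and pitch data below are invented for this example, not taken from the study materials): compute the variance of each machine's pitch locations and pick the machine with the smallest spread.

```python
# Minimal sketch of the expert solution (variance) for the pitching-machine
# contrasting cases. All data below are hypothetical, invented for illustration.

def variance(xs):
    """Population variance: the mean squared deviation from the mean."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Hypothetical pitch locations (distance from the target) for four machines.
machines = {
    "A": [2, 3, 2, 3],
    "B": [0, 5, 0, 5],
    "C": [1, 3, 2, 4],
    "D": [1, 4, 1, 4],
}

# The most reliable machine is the one whose pitches vary the least.
most_reliable = min(machines, key=lambda name: variance(machines[name]))
```

Each machine above has the same mean pitch location (2.5), so only a measure of spread, such as the variance, distinguishes them; this mirrors how well-designed contrasting cases isolate the deep feature (variability) from surface features.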
&lt;br /&gt;
The overall IPL process:&lt;br /&gt;
&lt;br /&gt;
[[Image:IPL process.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
See [[:Category:IPL|IPL Glossary]]&lt;br /&gt;
&lt;br /&gt;
=== Research questions ===&lt;br /&gt;
&lt;br /&gt;
# What is the overall effect of [[Invention task]]s on students’ domain knowledge, sense-making skills, and motivation, compared with [[direct instruction]]?&lt;br /&gt;
# What elements of invention contribute to that effect? What cognitive processes do they drive? In what ways does knowledge acquired following invention differ from knowledge acquired in direct instruction alone?&lt;br /&gt;
# Can the IPL process be scaled up using technology?&lt;br /&gt;
&lt;br /&gt;
In addition, the project makes the following contributions:&lt;br /&gt;
# It compares different measures of [[robust learning]], in order to understand what aspect of knowledge can be assessed using what type of measure.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Different studies manipulated different stages of the [[Invention task]]:&lt;br /&gt;
* Observation (a.k.a. comparative reasoning): comparing contrasting cases that vary along deep features, with regard to target concepts&lt;br /&gt;
* Generative reasoning: designing novel mathematical procedures to compare the contrasting cases with regard to the target concept&lt;br /&gt;
* Evaluation: evaluating the invented models&lt;br /&gt;
&lt;br /&gt;
=== Dependent variables ===&lt;br /&gt;
&lt;br /&gt;
Domain knowledge (in increasing &#039;distance&#039; from instruction): &lt;br /&gt;
* Normal measures&lt;br /&gt;
* Transfer measures&lt;br /&gt;
* New strategy items (with learning resource)&lt;br /&gt;
* New strategy items (without learning resource)&lt;br /&gt;
&lt;br /&gt;
[[Image:IPL measures.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Motivation and affect:&lt;br /&gt;
* Behavioral measure: % of students who kept working during breaks&lt;br /&gt;
* Self reports&lt;br /&gt;
&lt;br /&gt;
=== Hypotheses ===&lt;br /&gt;
&lt;br /&gt;
One hypothesis argues that generative reasoning (in the form of symbolic invention) is necessary to improve encoding of subsequent instruction. First, generative reasoning facilitates a process in which students express their prior ideas, identify their shortcomings, and refine their mental models, thus enabling conceptual change (Smith, diSessa, &amp;amp; Roschelle, 1994). For example, the self-explanation literature shows that asking students to explain their errors facilitates conceptual shift (cf. Siegler, 2002). &lt;br /&gt;
By attempting to invent and understand how different symbolic procedures succeed (or fail) to capture the differences between the contrasting cases, students also acquire a more cohesive and integrated understanding of the deep features of the domain. The importance of the symbolic nature of the process was demonstrated by Schwartz, Martin, and Pfaffman (2005), who asked students to reason verbally or mathematically about the balance beam problem. All students noticed the deep features of the balance beam domain - distance and weight. However, only students who reasoned mathematically were able to reconcile the two dimensions into a single representation. Interestingly, students’ thinking evolved even though their solutions were not complete, similar to the IPL effect. &lt;br /&gt;
Lastly, the generative reasoning process may help students understand the function of the different components of the procedure (for example, dividing by N controls for sample size). Thus, students may encode the subsequent instruction by function and not merely by procedure. Functional mental models were previously shown to lead to better adaptation of knowledge (Kieras &amp;amp; Bovair, 1984). Hatano and Inagaki (1986) describe a similar process in which developing mental models of how procedures interact with empirical knowledge helps students acquire conceptual understanding of the domain. &lt;br /&gt;
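The functional role mentioned above (dividing by N to control for sample size) can be made concrete with a short sketch (hypothetical data, purely illustrative): the raw sum of squared deviations grows with the number of observations even when the spread is unchanged, whereas dividing by N yields a size-independent measure.

```python
# Illustrates why dividing by N controls for sample size: two samples with the
# same spread but different sizes. Data are hypothetical, for illustration only.

def sum_sq_dev(xs):
    """Sum of squared deviations from the mean (not yet divided by N)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

small = [1, 5] * 2    # 4 observations with the same spread pattern...
large = [1, 5] * 10   # ...as these 20 observations.

# The raw totals differ (16 vs. 80) even though the spread is identical,
# but dividing by N gives the same variance (4.0) for both samples.
assert sum_sq_dev(small) != sum_sq_dev(large)
assert sum_sq_dev(small) / len(small) == sum_sq_dev(large) / len(large) == 4.0
```

Encoding the formula by the function of its parts (the sum captures total spread; the division makes it comparable across sample sizes) is the kind of functional encoding that the hypothesis above attributes to generative reasoning.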
An alternative hypothesis argues that comparative reasoning is sufficient to achieve the learning benefits of IPL. According to this hypothesis, the benefits of invention stem from noticing and encoding the deep features of the domain. The comparative reasoning activity achieves that benefit by asking students to compare contrasting cases that differ with respect to their deep features (Bransford &amp;amp; Schwartz, 2001). This qualitative analysis helps students set requirements for a valid model and thus acquire a better understanding (even if implicit) of the target concepts. Furthermore, according to this hypothesis, not only does the symbolic invention not contribute to future learning, it may waste students’ time (and thus reduce efficiency) or impose excessive cognitive load (Kirschner, Sweller &amp;amp; Clark, 2006).&lt;br /&gt;
A second research question addressed by our current study examines the effect of IPL on the flexibility of students’ knowledge. We follow a distinction made by McDaniel and Schlager (1990) between transfer problems that require the application of a learned strategy (conventional transfer problems) and transfer problems that require the generation of a new strategy. McDaniel and Schlager found that while discovery tasks improve students’ performance on the latter, they have no effect on conventional transfer problems. Schwartz and Martin (2004) add a twist to these results: they found that IPL improves students’ ability to solve new-strategy problems as long as they are provided with instruction on how to do so. To further investigate the effect of IPL on knowledge flexibility, we evaluate students’ ability to independently solve new-strategy problems and encode new-strategy instruction. Our hypothesis, consistent with McDaniel and Schlager (1990), is that students who engage in IPL will acquire more flexible knowledge and thus will demonstrate better performance on new-strategy items. At the same time, they will not show a better ability to use existing strategies in novel contexts (conventional transfer items). Furthermore, following the findings of Schwartz and Martin (2004), we hypothesize that the effect of IPL will mainly be on encoding new-strategy instruction. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Instructional Principles ====&lt;br /&gt;
See [[IPL Instructional Principles]]&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
Domain knowledge:&lt;br /&gt;
[[Image:test results.jpg]]&lt;br /&gt;
* IPL students in advanced classes were more capable of solving new-strategy items without a learning resource. In fact, in the absence of a learning resource, direct instruction students performed at floor, while IPL students performed as well as they did with the resource. &lt;br /&gt;
* This effect holds when controlling for simple domain knowledge (performance on normal items in the same test).&lt;br /&gt;
* This was found in multiple new-strategy items. However, all results were found on a single topic (central tendency and graphing). The single test item on the topic of variability failed to capture differences between the groups.&lt;br /&gt;
&lt;br /&gt;
Motivation:&lt;br /&gt;
* IPL students reported having benefited more (F=3.3, p&amp;lt;.07)&lt;br /&gt;
* There was a significant interaction between condition and test anxiety. Test anxiety was assessed using the MSLQ (Pintrich, 1999) before the study began. Students who reported lower test anxiety also reported having benefited more from IPL instruction, compared to high-anxiety students in the No Design condition. &lt;br /&gt;
* IPL students stayed in class to work during breaks more often (IPL: 16%; No Design: 3%). &lt;br /&gt;
* Furthermore, they did so during invention activities and not during show-and-practice activities, suggesting that it is the invention activities themselves that are motivating.&lt;br /&gt;
[[Image:motivational results.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
Regarding our first research question, we found that generative reasoning (on top of comparative reasoning) had a positive effect on students’ ability to solve new-strategy problems with no learning resource in the advanced classes. At the same time, as hypothesized, it had only a marginal effect on normal or conventional transfer items. These results are especially interesting since Full IPL students had approximately half the time for instruction and practice compared with their No Design counterparts. &lt;br /&gt;
Regarding the second research question, which dealt with students’ knowledge flexibility, we found that in the advanced classes, students who designed novel methods during IPL were more capable of solving problems that require the use of novel strategies. This finding echoes the effect found by McDaniel and Schlager (1990). Interestingly, the effect of IPL on new-strategy items with no resources holds even when controlling for performance on normal items on the same test. Thus, this effect can probably not be attributed to more domain knowledge. Instead, it is likely the outcome of a different encoding of domain knowledge, in a manner that is not reflected in normal or transfer items.&lt;br /&gt;
On further scrutiny, students in both conditions did equally well on all tasks for which they received some form of instruction - whether in class (on normal and conventional transfer items) or embedded in the test (on new-strategy items with embedded learning resources). Regarding the latter, it seems that Full IPL students did not need the additional instruction, whereas No Design students did not manage to solve the new-strategy problems without it. The performance of Full IPL students on new-strategy items remained virtually the same even in the absence of embedded instruction. This finding is at odds with earlier findings by Schwartz and Martin (2004), who found that IPL improves students’ ability to encode future instruction but not to solve novel problems without additional instruction. One explanation for the discrepancy between the studies is that the control group in Schwartz and Martin (2004) did not engage in comparative reasoning. Therefore, it may be that the comparative reasoning stage helped students in our study to encode the novel instruction.&lt;br /&gt;
An alternative explanation examines these results in terms of ‘distance’ from original classroom instruction. It may be that the embedded instruction on the first topic in our study was close to the classroom material, and thus simple enough for all students to encode. In contrast, the embedded learning resource in the study described by Schwartz and Martin (2004) was sufficiently far from the classroom instruction. Therefore, only IPL students, who had acquired more flexible knowledge, could learn from it and apply the acquired knowledge successfully. This explanation further suggests that in the absence of additional instruction, only Full IPL students in our study could make the leap and answer the target new-strategy items. &lt;br /&gt;
While this argument explains performance on new-strategy items (with or without instruction) in terms of distance from classroom instruction, it does not explain what factors determine this distance. What makes some items ‘closer’ than others? What prepared Full IPL students for improved performance on some items but not on others? &lt;br /&gt;
Students may grapple with many challenges during the invention phase, many of which do not receive attention during classroom instruction. Students who invent are exposed to various challenges by virtue of attempting to invent general valid methods. We hypothesize that students use knowledge acquired during these experiences when later integrating new-strategy tasks into their existing body of knowledge. For example, the post-tests in this study included three new-strategy items, requiring the following new strategies: (1) comparing multiple datasets in a single representation; (2) representing data in unconventional intervals; and (3) finding the ratio between variability and average in order to account for differences in magnitude. These topics were not covered during classroom instruction. However, when we analyzed students’ inventions, we noticed that many inventions included features that could prepare students to expand the instructed knowledge and invent the first two strategies (see Figure 3). Subsequently, Full IPL students demonstrated better performance on the relevant new-strategy items. At the same time, no student attempted during invention to compare datasets with different magnitudes. Correspondingly, Full IPL students did not exhibit better performance on this new-strategy item.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Connections===&lt;br /&gt;
&lt;br /&gt;
* Tim Nokes&#039;s study&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Annotated bibliography ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Empirical Study]]&lt;br /&gt;
[[Category:Protected]]&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=File:IPL_process.jpg&amp;diff=9485</id>
		<title>File:IPL process.jpg</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=File:IPL_process.jpg&amp;diff=9485"/>
		<updated>2009-05-21T03:51:39Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=File:Contrasting_cases.jpg&amp;diff=9484</id>
		<title>File:Contrasting cases.jpg</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=File:Contrasting_cases.jpg&amp;diff=9484"/>
		<updated>2009-05-21T03:50:58Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_IPL&amp;diff=9483</id>
		<title>Roll IPL</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_IPL&amp;diff=9483"/>
		<updated>2009-05-21T03:34:14Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Invention as preparation for learning ==&lt;br /&gt;
Ido Roll, Vincent Aleven, Bruce M. McLaren, Kenneth Koedinger&lt;br /&gt;
&lt;br /&gt;
Can invention activities prepare students to better learn from subsequent instruction, compared with instruction-and-practice only?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Meta-data ===&lt;br /&gt;
&lt;br /&gt;
PI&#039;s: Ido Roll, Vincent Aleven, Dan Schwartz, Ken Koedinger&lt;br /&gt;
&lt;br /&gt;
Other Contributors: David Klahr&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Study # !! Start Date !! End Date !! LearnLab Site !!  # of Students !! Total Participant Hours !! DataShop? &lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;1&#039;&#039;&#039; || 4/2007       || 4/2007     || North Hills || 20 || 40 || No, paper-and-pencil only&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;2&#039;&#039;&#039; || 9/2007       || 12/2007    || Community Day || 4 || 48 || No, paper-and-pencil only&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;3&#039;&#039;&#039; || 4/2008       || 5/2008     || Steel Valley || 125 || 900 || No, paper-and-pencil only&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;4&#039;&#039;&#039; || 4/2009       || 5/2009     || Steel Valley || 140 || 560 || Partially in DataShop; upload of the remainder is in progress&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
&lt;br /&gt;
The [[assistance dilemma]] asks what form of assistance is most appropriate for different stages of learning. While direct instruction and practice have been shown to be efficient for novices, students often acquire shallow knowledge components and lack robust understanding. Some evidence suggests that invention using contrasting cases, prior to instruction and practice, can accelerate future learning, compared with instruction and practice alone (Schwartz &amp;amp; Martin, 2004). &lt;br /&gt;
The invention process as described in Schwartz &amp;amp; Martin (2004) includes the following stages: design (of a mathematical model to solve a class of problems); calculation (of the solution based on the model); evaluation (of its correctness); and debugging (of the faulty model). Notably, most students fail to invent mathematically valid models, so the goal is not for students to discover the correct solution. At the same time, students do make models that capture deep features of the class of problems, which prepares them to learn and understand the significance of expert solutions for handling such situations. Following the invention, students receive instruction on the expert solution (that is, formulas) and practice it. This procedure is based on the hypothesis that students’ own inventions, together with subsequent instruction, are sources for coordinative learning. By attempting to create a model that correctly distinguishes the “contrasting cases” (carefully selected instances within a class of problems), students notice (and to some degree invent) the problem features that an adequate model must take into account, and they attend to them during subsequent instruction. However, alternative explanations for the effectiveness of the IPL process are possible, with different instructional implications. A “debugging hypothesis” suggests that evaluation and debugging of pre-designed models are sufficient to promote future learning by directing students’ attention to the shortcomings of the designed models, and thus to the deep features of the domain. Alternatively, an “unfinished goals” hypothesis suggests that the effect is caused by students reaching impasses during invention. According to this hypothesis, calculation and evaluation are sufficient to prepare for future learning.&lt;br /&gt;
We propose to investigate this in a series of ablation studies with the goal of better defining the invention process and identifying the cognitive processes involved. This includes a combination of in-vivo and lab studies within the Algebra LearnLab, contributing to the Coordinative Learning theoretical framework. Following the ablation studies we plan to implement the procedure in a Cognitive Tutor, which will be evaluated in a lab study. This will allow us to better operationalize the process, do a micro-genetic analysis of it, and identify productive patterns of learning trajectories using log mining. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
One of the main challenges of education is to help students reach meaningful and robust learning. The assistance dilemma raises the question of what form (and ‘amount’) of assistance is most effective with different learners in different stages of the learning process (Koedinger &amp;amp; Aleven, in press). Instruction followed by practice is known to be very efficient for teaching novices (e.g., Koedinger, Anderson, Hadley &amp;amp; Mark, 1997); yet, students often acquire shallow procedural skills and fail to acquire conceptual understanding (Aleven &amp;amp; Koedinger, 2002). This can be attributed, at least in part, to students using superficial features and not encoding the deep features of the domain (Chi, Feltovich &amp;amp; Glaser, 1981). &lt;br /&gt;
One approach to getting students to attend to and encode the deep features is to add an invention phase prior to instruction. Invention as preparation for learning (IPL) was shown to help students better cope with novel situations that require learning (Schwartz &amp;amp; Martin, 2004; Sears, 2006). In this process students are presented with a dilemma in the form of contrasting cases, and attempt to invent a mathematical model to resolve this dilemma. For example, Figure 1 shows four possible pitching machines. Students are asked to invent a method that will allow them to pick the most reliable machine. The concept of contrasting cases comes from the perceptual learning literature, since these cases, when appropriately designed, emphasize differences in the deep structure of the examples (Gibson &amp;amp; Gibson, 1955). The invention process includes designing a model, applying it to the given set of contrasting cases, evaluating the result, and debugging the model. This iterative process is very similar to the debugging process as described by Klahr and Carver (1988; Figure 2). Unlike other inquiry-based manipulations (cf. Lehrer et al., 2001; de Jong &amp;amp; van Joolingen, 1998), the goal of the IPL process is not for students to discover the correct model, but to prepare them for subsequent instruction. During the instruction students share their models, critique their peers&#039; models, and learn the expert solutions (a similar classroom critique process was shown to be effective by White &amp;amp; Frederiksen, 1998). Preparation for learning from the instruction is evaluated using an accelerated future learning assessment, which includes instruction embedded in the test in the form of a solved example.&lt;br /&gt;
Schwartz and Martin (2004) found that only students who invented prior to the test were able to take advantage of that instruction in order to solve the novel problem, while students who practiced a given visual method prior to the test did not take advantage of the embedded learning resource and thus could not solve the target problem. This shows that the IPL process has a positive effect on students’ ability to independently learn from solved examples. In the case of the contrasting cases given in Figure 1, subsequent instruction will introduce students to the notion (and formulas) of variance.&lt;br /&gt;
While the invention group was superior to the instruction-and-practice group on the accelerated future learning measure, there was no direct comparison of normal or transfer measures between the invention and instruction-and-practice conditions (though invention students showed pre-to-post gains and were shown to outperform college students). Also, it is not yet clear how robust this pedagogy is and what its key features are. &lt;br /&gt;
&lt;br /&gt;
Example for contrasting cases (topic - variability)&lt;br /&gt;
[[Image:contrasting cases.jpg]]&lt;br /&gt;
&lt;br /&gt;
The overall IPL process:&lt;br /&gt;
[[Image:IPL process.jpg]]&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
See [[:Category:IPL|IPL Glossary]]&lt;br /&gt;
&lt;br /&gt;
=== Research questions ===&lt;br /&gt;
&lt;br /&gt;
# What is the overall effect of [[Invention task]]s on students’ domain knowledge, sense-making skills, and motivation, compared with [[direct instruction]]?&lt;br /&gt;
# What elements of invention contribute to that effect? What cognitive processes do they drive? In what ways does knowledge acquired following invention differ from knowledge acquired in direct instruction alone?&lt;br /&gt;
# Can the IPL process be scaled up using technology?&lt;br /&gt;
&lt;br /&gt;
In addition, the project makes the following contributions:&lt;br /&gt;
# It compares different measures of [[robust learning]], in order to understand what aspect of knowledge can be assessed using what type of measure.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Different studies manipulated different stages of the [[Invention task]]:&lt;br /&gt;
* Observation (a.k.a. comparative reasoning): comparing contrasting cases that vary along deep features, with regard to target concepts&lt;br /&gt;
* Generative reasoning: designing novel mathematical procedures to compare the contrasting cases with regard to the target concept&lt;br /&gt;
* Evaluation: evaluating the invented models&lt;br /&gt;
&lt;br /&gt;
=== Dependent variables ===&lt;br /&gt;
&lt;br /&gt;
Domain knowledge (in increasing &#039;distance&#039; from instruction): &lt;br /&gt;
* Normal measures&lt;br /&gt;
* Transfer measures&lt;br /&gt;
* New strategy items (with learning resource)&lt;br /&gt;
* New strategy items (without learning resource)&lt;br /&gt;
&lt;br /&gt;
[[Image:IPL measures.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Motivation and affect:&lt;br /&gt;
* Behavioral measure: % of students who kept working during breaks&lt;br /&gt;
* Self reports&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
&lt;br /&gt;
One hypothesis argues that generative reasoning (in the form of symbolic invention) is necessary to improve encoding of subsequent instruction. First, generative reasoning facilitates a process in which students express their prior ideas, identify their shortcomings, and refine their mental models, thus enabling conceptual change (Smith, diSessa, &amp;amp; Roschelle, 1994). For example, the self-explanation literature shows that asking students to explain their errors facilitates conceptual shift (cf. Siegler, 2002). &lt;br /&gt;
By attempting to invent and understand how different symbolic procedures succeed (or fail) to capture the differences between the contrasting cases, students also acquire a more cohesive and integrated understanding of the deep features of the domain. The importance of the symbolic nature of the process was demonstrated by Schwartz, Martin, and Pfaffman (2005), who asked students to reason verbally or mathematically about the balance beam problem. All students noticed the deep features of the balance beam domain - distance and weight. However, only students who reasoned mathematically were able to reconcile the two dimensions to a single representation. Interestingly, students’ thinking evolved even though their solutions were not complete, similar to the IPL effect. &lt;br /&gt;
Lastly, the generative reasoning process may help students understand the function of the different components of the procedure (for example, dividing by N controls for sample size). Thus, students may encode the subsequent instruction by function and not merely by procedure. Functional mental models were previously shown to lead to better adaptation of knowledge (Kieras &amp;amp; Bovair, 1984). Hatano and Inagaki (1986) describe a similar process in which developing mental models of how procedures interact with empirical knowledge helps students acquire conceptual understanding of the domain. &lt;br /&gt;
An alternative hypothesis argues that comparative reasoning is sufficient to achieve the learning benefits of IPL. According to this hypothesis, the benefits of invention stem from noticing and encoding the deep features of the domain. The comparative reasoning activity achieves that benefit by asking students to compare contrasting cases that differ with respect to their deep features (Bransford &amp;amp; Schwartz, 2001). This qualitative analysis helps students set requirements for a valid model and thus acquire a better understanding (even if implicit) of the target concepts. Furthermore, according to this hypothesis, not only does the symbolic invention not contribute to future learning, it may waste students’ time (and thus reduce efficiency) or impose excessive cognitive load (Kirschner, Sweller &amp;amp; Clark, 2006).&lt;br /&gt;
A second research question addressed by our current study examines the effect of IPL on the flexibility of students’ knowledge. We follow a distinction made by McDaniel and Schlager (1990) between transfer problems that require the application of a learned strategy (conventional transfer problems) and transfer problems that require the generation of a new strategy. McDaniel and Schlager found that while discovery tasks improve students’ performance on the latter, they have no effect on conventional transfer problems. Schwartz and Martin (2004) add a twist to these results. They found that IPL improves students’ ability to solve new-strategy problems as long as they are provided with instruction on how to do so. To further investigate the effect of IPL on knowledge flexibility, we evaluate students’ ability to independently solve new-strategy problems and encode new-strategy instructions. Our hypothesis, as supported by McDaniel and Schlager (1990), is that students who are engaged in IPL will acquire more flexible knowledge and thus will demonstrate better performance on new-strategy items. At the same time they will not show better ability to use existing strategies in novel contexts (conventional transfer items). Furthermore, following the findings of Schwartz and Martin (2004), we hypothesize that the effect of IPL will be mainly on encoding new-strategy instructions. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Instructional Principles ====&lt;br /&gt;
See [[IPL Instructional Principles]]&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
Domain knowledge:&lt;br /&gt;
[[Image:test results.jpg]]&lt;br /&gt;
* IPL students in advanced classes were more capable of solving new-strategy items without a learning resource. In fact, in the absence of a learning resource, direct instruction students performed at floor, while IPL students performed as well as they did with the resource. &lt;br /&gt;
* This effect holds when controlling for simple domain knowledge (performance on normal items in the same test).&lt;br /&gt;
* This was found in multiple new-strategy items. However, all results were found on a single topic (central tendency and graphing). The single test item on the topic of variability failed to capture differences between the groups.&lt;br /&gt;
&lt;br /&gt;
Motivation:&lt;br /&gt;
* IPL students reported having benefited more (F=3.3, p&amp;lt;.07)&lt;br /&gt;
* There was a significant interaction between condition and test anxiety. Test anxiety was assessed using the MSLQ (Pintrich, 1999) before the study began. Students who reported lower test anxiety also reported having benefited more from IPL instruction than high-anxiety students in the No Design condition. &lt;br /&gt;
* IPL students more often stayed in class to work during breaks (IPL: 16%; No Design: 3%). &lt;br /&gt;
* Furthermore, they did so during invention activities and not show-and-practice activities, suggesting that the invention activities themselves are motivating.&lt;br /&gt;
[[Image:motivational results.jpg]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
Regarding our first research question, we found that generative reasoning (on top of comparative reasoning) had a positive effect on students’ ability to solve new-strategy problems with no learning resource in the advanced classes. At the same time, as hypothesized, it had a marginal effect on normal and conventional transfer items. These results are especially interesting since Full IPL students had approximately half the time for instruction and practice compared with their No Design counterparts.&lt;br /&gt;
Regarding the second research question, which dealt with students’ knowledge flexibility, we found that in the advanced classes, students who designed novel methods during IPL were more capable of solving problems that require the use of novel strategies. This finding echoes the effect found by McDaniel and Schlager (1990). Interestingly, the effect of IPL on new-strategy items with no resources holds even when controlling for performance on normal items on the same test. Thus, this effect probably cannot be attributed to greater domain knowledge. Instead, it is likely the outcome of a different encoding of domain knowledge, in a manner that is not reflected in normal or transfer items.&lt;br /&gt;
On further scrutiny, students in both conditions did equally well on all tasks for which they received some form of instruction - whether in class (on normal and conventional transfer items) or embedded in the test (on new-strategy items with embedded learning resources). Regarding the latter, it seems that Full IPL students did not need the additional instruction, whereas No Design students did not manage to solve the new-strategy problems without it. The performance of Full IPL students on new-strategy items remained virtually the same even in the absence of embedded instruction. This finding is at odds with earlier findings by Schwartz and Martin (2004), who found that IPL improves students’ ability to encode future instruction but not to solve novel problems without additional instruction. One explanation for the discrepancy between the studies is that the control group in Schwartz and Martin (2004) did not engage in comparative reasoning. Therefore, it may be that the comparative reasoning stage helped students in our study to encode the novel instruction.&lt;br /&gt;
An alternative explanation examines these results in terms of ‘distance’ from original classroom instruction. It may be that the embedded instruction on the first topic in our study was close to the classroom material, and thus simple enough for all students to encode. In contrast, the embedded learning resource in the study described by Schwartz and Martin (2004) was sufficiently far from the classroom instruction. Therefore, only IPL students, who had acquired more flexible knowledge, could learn from it and apply the acquired knowledge successfully. This explanation further suggests that in the absence of additional instruction, only Full IPL students in our study could make the leap and answer the target new-strategy items. &lt;br /&gt;
While this argument explains performance on new-strategy items (with or without instruction) in terms of distance from classroom instruction, it does not explain what factors determine this distance. What makes some items ‘closer’ than others? What prepared Full IPL students for improved performance on some items but not on others? &lt;br /&gt;
Students may grapple with many challenges during the invention phase, many of which do not receive attention during classroom instruction. Students who invent are exposed to various challenges by virtue of attempting to invent general valid methods. We hypothesize that students use knowledge acquired during these experiences when later integrating new-strategy tasks into their existing body of knowledge. For example, the post-tests in this study included three new-strategy items, requiring the following new strategies: (1) comparing multiple datasets in a single representation; (2) representing data in unconventional intervals; and (3) finding the ratio between variability and average in order to account for differences in magnitude. These topics were not covered during classroom instruction. However, when we analyzed students’ inventions, we noticed that many inventions included features that could prepare students to expand the instructed knowledge and invent the first two strategies (see Figure 3). Subsequently, Full IPL students demonstrated better performance on the relevant new-strategy items. At the same time, no student attempted during invention to compare datasets with different magnitudes. Correspondingly, Full IPL students did not exhibit better performance on this new-strategy item.&lt;br /&gt;
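The third strategy above (finding the ratio between variability and average) corresponds to what statistics calls the coefficient of variation: dividing spread by the mean so that datasets of different magnitudes can be compared on consistency alone. A minimal sketch of that idea in Python, using made-up numbers (not data from the study):

```python
import statistics

def coefficient_of_variation(data):
    """Ratio of variability to average: lets datasets of very
    different magnitudes be compared on consistency alone."""
    return statistics.pstdev(data) / statistics.mean(data)

# Hypothetical datasets whose magnitudes differ tenfold:
small = [9, 10, 11, 10]      # mean 10
large = [90, 100, 110, 100]  # mean 100

# Raw standard deviations differ tenfold, but the relative
# variability of the two datasets is identical:
cv_small = coefficient_of_variation(small)
cv_large = coefficient_of_variation(large)
```

The raw standard deviations differ by a factor of ten, yet the two ratios are equal; seeing why requires inventing the normalization step, which is exactly what makes this item a "new strategy" relative to the instructed material.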
&lt;br /&gt;
&lt;br /&gt;
=== Connections===&lt;br /&gt;
&lt;br /&gt;
* Tim Nokes&#039;s study&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Annotated bibliography ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Empirical Study]]&lt;br /&gt;
[[Category:Protected]]&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Direct_instruction&amp;diff=7746</id>
		<title>Direct instruction</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Direct_instruction&amp;diff=7746"/>
		<updated>2008-04-09T18:41:06Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[category:glossary]]&lt;br /&gt;
[[category:IPL]]&lt;br /&gt;
[[category:Coordinative Learning]]&lt;br /&gt;
&lt;br /&gt;
- this is only a stub -&lt;br /&gt;
&lt;br /&gt;
Relevant knowledge explicitly delivered to students. Can include both procedural (cookbook) and conceptual components. &lt;br /&gt;
* Classroom instruction: mediated by the teacher&lt;br /&gt;
* Written instruction: embedded within a test or homework. Often includes a [[Worked examples]] component.&lt;br /&gt;
[[Category:Protected]]&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=IPL_Instructional_Principles&amp;diff=7745</id>
		<title>IPL Instructional Principles</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=IPL_Instructional_Principles&amp;diff=7745"/>
		<updated>2008-04-09T18:40:53Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Brief statement of principle==&lt;br /&gt;
&lt;br /&gt;
Asking students to invent solutions to carefully designed challenges, prior to receiving instruction, can promote learning from the subsequent instruction.&lt;br /&gt;
&lt;br /&gt;
==Description of principle==&lt;br /&gt;
===Operational definition===&lt;br /&gt;
Students should attempt to rank alternatives in an [[invention task]] by [[comparing sets]] of contrasting cases, before receiving direct instruction and practice.&lt;br /&gt;
&lt;br /&gt;
===Examples===&lt;br /&gt;
The following example is an [[invention task]] using [[comparing sets|set comparison]] as a preparation for learning about variance&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
NASA is about to launch its latest weather satellite to space, with the goal of monitoring global warming. &lt;br /&gt;
In order to put the satellite in orbit, it is sent to space on a rocket. The rocket releases the satellite when it reaches its highest point.&lt;br /&gt;
NASA considers three rockets for this task: Fly-I, Orbitter, and Icarus. Each rocket was tested 4-5 times. At this point NASA wants to choose one rocket for further development.&lt;br /&gt;
&lt;br /&gt;
While the amount of fuel may not be the right one (that is, perhaps more or less fuel was needed), it was identical in all trials. This means that at this point NASA does not care about the absolute height, since the amount of fuel will need to be adjusted. But NASA does care about the ability to predict what height the rocket will reach – that is, how consistent the rocket is. A consistent rocket arrives at the same height every time.&lt;br /&gt;
&lt;br /&gt;
Which rocket would you recommend?&lt;br /&gt;
 &lt;br /&gt;
The following graphs show the height the rockets reached during testing, relative to the desired height:&lt;br /&gt;
&lt;br /&gt;
[[Image:NASA task.jpg]]&lt;br /&gt;
&lt;br /&gt;
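Under the expert solution that the subsequent instruction targets (variance and standard deviation), the task above reduces to picking the rocket whose test heights vary least. A sketch in Python with hypothetical trial data (the actual values appear only in the figure):

```python
import statistics

# Hypothetical test-flight heights, relative to the desired height,
# for the three rockets in the task (illustrative numbers only):
trials = {
    "Fly-I":    [-2.0, 1.5, 0.5, -1.0, 1.0],
    "Orbitter": [0.2, -0.1, 0.1, 0.0],
    "Icarus":   [3.0, -2.5, 2.0, -1.5, -1.0],
}

# Expert solution: the most consistent rocket is the one whose
# heights have the smallest standard deviation.
most_consistent = min(trials, key=lambda r: statistics.pstdev(trials[r]))
```

Students are not expected to discover this formula; the invented methods (e.g., summing distances from the average) need only capture the same deep feature of spread.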
==Experimental support==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Laboratory experiment support===&lt;br /&gt;
===In vivo experiment support===&lt;br /&gt;
Schwartz &amp;amp; Martin (2004) found that IPL activities help students learn from subsequent instruction.&lt;br /&gt;
[[Roll IPL|Roll, Aleven, Schwartz and Koedinger]] are currently carrying out another classroom study on the topic.&lt;br /&gt;
&lt;br /&gt;
==Theoretical rationale== &lt;br /&gt;
(These entries should link to one or more [[:Category:Learning Processes|learning processes]].)&lt;br /&gt;
&lt;br /&gt;
==Conditions of application==&lt;br /&gt;
Several conditions are being investigated in the current [[Roll IPL|IPL]] study, namely:&lt;br /&gt;
* The need for design in the invention process&lt;br /&gt;
* The need for debugging in the invention process&lt;br /&gt;
&lt;br /&gt;
It is hypothesized, though not yet tested empirically, that the [[comparing sets|set comparison]] tasks should use contrasting cases and not isomorphic cases.&lt;br /&gt;
&lt;br /&gt;
==Caveats, limitations, open issues, or dissenting views==&lt;br /&gt;
Several researchers object to any form of discovery activity and argue that [[direct instruction]] is always the superior alternative (Kirschner, Sweller, &amp;amp; Clark, 2006).&lt;br /&gt;
&lt;br /&gt;
==Variations (descendants)==&lt;br /&gt;
==Generalizations (ascendants)==&lt;br /&gt;
==References==&lt;br /&gt;
* Kirschner, P. A., Sweller, J., &amp;amp; Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86.&lt;br /&gt;
* Schwartz, D. L., &amp;amp; Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129–184.&lt;br /&gt;
[[Category:Glossary]]&lt;br /&gt;
[[Category:Instructional Principle]]&lt;br /&gt;
[[Category:Protected]]&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_IPL&amp;diff=7744</id>
		<title>Roll IPL</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Roll_IPL&amp;diff=7744"/>
		<updated>2008-04-09T18:40:14Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Invention as preparation for learning ==&lt;br /&gt;
Ido Roll, Vincent Aleven, Bruce M. McLaren, Kenneth Koedinger&lt;br /&gt;
&lt;br /&gt;
Can invention activities prepare students to better learn from subsequent instruction, compared with instruction-and-practice only?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Meta-data ===&lt;br /&gt;
&lt;br /&gt;
PIs: Ido Roll, Vincent Aleven, Dan Schwartz, Ken Koedinger&lt;br /&gt;
&lt;br /&gt;
Other Contributors: David Klahr&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Study # !! Start Date !! End Date !! LearnLab Site !!  # of Students !! Total Participant Hours !! DataShop? &lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;1&#039;&#039;&#039; || 4/2007       || 4/2007     || North Hills || 20 || 40 || No, paper-and-pencil only&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;2&#039;&#039;&#039; || 9/2007       || 12/2007     || Community Day || 4 || 48 || No, paper-and-pencil only&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;3&#039;&#039;&#039; || 4/2008       || 5/2008     || Steel Valley || 150 || 900 || No, paper-and-pencil only&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
&lt;br /&gt;
The [[assistance dilemma]] asks what form of assistance is most appropriate for different stages of learning. While direct instruction and practice have been shown to be efficient for novices, students often acquire shallow knowledge components and lack robust understanding. Some evidence suggests that invention using contrasting cases, prior to instruction and practice, can accelerate future learning, compared with instruction and practice alone (Schwartz &amp;amp; Martin, 2004). &lt;br /&gt;
The invention process as described in Schwartz &amp;amp; Martin (2004) includes the following stages: design (of a mathematical model to solve a class of problems); calculation (of the solution based on the model); evaluation (of its correctness); and debugging (of the faulty model). Notably, most students fail to invent mathematically valid models, so the goal is not for students to discover the correct solution. At the same time, students do make models that capture deep features of the class of problems, which prepares them to learn and understand the significance of expert solutions for handling such situations. Following the invention, students receive instruction on the expert solution (that is, formulas) and practice it. This procedure is based on the hypothesis that students’ own inventions, together with subsequent instruction, are sources for coordinative learning. By attempting to create a model that correctly distinguishes the “contrasting cases” (carefully selected instances within a class of problems), students notice (and to some degree invent) the problem features that an adequate model must take into account, and they attend to them during subsequent instruction. However, alternative explanations for the effectiveness of the IPL process are possible, with different instructional implications. A “debugging hypothesis” suggests that evaluation and debugging of pre-designed models are sufficient to promote future learning by directing students’ attention to the shortcomings of the designed models, and thus to the deep features of the domain. Alternatively, an “unfinished goals” hypothesis suggests that the effect is caused by students reaching impasses during invention. According to this hypothesis, calculation and evaluation alone are sufficient for preparing for future learning.&lt;br /&gt;
We propose to investigate this in a series of ablation studies with the goal of better defining the invention process and identifying the cognitive processes involved. This includes a combination of in-vivo and lab studies within the Algebra LearnLab, contributing to the Coordinative Learning theoretical framework. Following the ablation studies we plan to implement the procedure in a Cognitive Tutor, which will be evaluated in a lab study. This will allow us to better operationalize the process, do a micro-genetic analysis of it, and identify productive patterns of learning trajectories using log mining. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
One of the main challenges of education is to help students reach meaningful and robust learning. The assistance dilemma raises the question of what form (and ‘amount’) of assistance is most effective with different learners in different stages of the learning process (Koedinger &amp;amp; Aleven, in press). Instruction followed by practice is known to be very efficient for teaching novices (e.g., Koedinger, Anderson, Hadley &amp;amp; Mark, 1997); yet, students often acquire shallow procedural skills, and fail to acquire conceptual understanding (Aleven &amp;amp; Koedinger, 2002). This can be attributed, at least in part, to students using superficial features and not encoding the deep features of the domain (Chi, Feltovich &amp;amp; Glaser, 1981). &lt;br /&gt;
One approach to getting students to attend to and encode the deep features is to add an invention phase prior to instruction. Invention as preparation for learning (IPL) was shown to help students better cope with novel situations that require learning (Schwartz &amp;amp; Martin, 2004; Sears, 2006). In this process students are presented with a dilemma in the form of contrasting cases, and attempt to invent a mathematical model to resolve this dilemma. For example, Figure 1 shows four possible pitching machines. Students are asked to invent a method that will allow them to pick the most reliable machine. The concept of contrasting cases comes from the perceptual learning literature, since these cases, when appropriately designed, emphasize differences in the deep structure of the examples (Gibson &amp;amp; Gibson, 1955). The invention process includes designing a model, applying it to the given set of contrasting cases, evaluating the result, and debugging the model. This iterative process is very similar to the debugging process as described by Klahr and Carver (1988; Figure 2). Unlike other inquiry-based manipulations (cf. Lehrer et al., 2001; de Jong &amp;amp; van Joolingen, 1998), the goal of the IPL process is not for students to discover the correct model, but to prepare them for subsequent instruction. During the instruction students share their models, critique their peers’ models, and learn the expert solutions (a similar classroom critique process was shown to be effective by White &amp;amp; Frederiksen, 1998). Preparation for learning from the instruction is evaluated using accelerated future learning assessment. The accelerated future learning assessment includes instruction embedded in the test in the form of a solved example. 
Schwartz and Martin (2004) found that only students who invented prior to the test were able to take advantage of that instruction in order to solve a novel problem, while students who practiced a given visual method prior to the test did not take advantage of the embedded learning resource and thus could not solve the target problem. This shows that the IPL process has a positive effect on students’ ability to independently learn from solved examples. In the case of the contrasting cases given in Figure 1, subsequent instruction will introduce students to the notion (and formulas) of variance.&lt;br /&gt;
While the invention group was superior to the instruction-and-practice group on the accelerated future learning measure, there was no direct comparison of normal or transfer measures between the invention and instruction-and-practice conditions (though invention students showed pre-to-post gains, and were shown to outperform college students). Also, it is not yet clear how robust this pedagogy is and what its key features are.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
See [[:Category:IPL|IPL Glossary]]&lt;br /&gt;
&lt;br /&gt;
=== Research questions ===&lt;br /&gt;
&lt;br /&gt;
# Do [[Invention task]]s prepare students to better learn from subsequent [[direct instruction]]?&lt;br /&gt;
# What cognitive processes drive this effect?&lt;br /&gt;
# What properties of instruction support these processes?&lt;br /&gt;
&lt;br /&gt;
In addition, the project makes the following contributions:&lt;br /&gt;
# It compares different measures of [[robust learning]], in order to understand what aspect of knowledge can be assessed using what type of measure.&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
The existence of structured [[Invention task]]s, prior to and in addition to conventional [[Direct instruction]] and [[Practice]].&lt;br /&gt;
&lt;br /&gt;
=== Dependent variables ===&lt;br /&gt;
&lt;br /&gt;
* Normal (procedural) measures&lt;br /&gt;
* Transfer (conceptual) measures&lt;br /&gt;
* Long-term retention measures&lt;br /&gt;
* Near (pre-motivated) future learning measures&lt;br /&gt;
* Far (un-motivated) future learning measures&lt;br /&gt;
* Motivation and affect questionnaire&lt;br /&gt;
* Invention skills measures&lt;br /&gt;
* Debugging skills measures&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
&lt;br /&gt;
* Invention activities help students acquire more flexible and adaptive knowledge. The further the task is from the original domain, the more benefit will be shown for the invention activities&lt;br /&gt;
(Normal &amp;lt; Transfer &amp;lt; Near future learning &amp;lt; Far future learning).&lt;br /&gt;
* Invention students will acquire better debugging skills, but no better invention skills.&lt;br /&gt;
* Invention students will be more motivated&lt;br /&gt;
&lt;br /&gt;
==== Instructional Principles ====&lt;br /&gt;
See [[IPL Instructional Principles]]&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
None yet.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Connections===&lt;br /&gt;
&lt;br /&gt;
* Tim Nokes&#039;s study&lt;br /&gt;
&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Annotated bibliography ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Empirical Study]]&lt;br /&gt;
[[Category:Protected]]&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Composition_Effect_Kao_Roll_-_old,_please_keep&amp;diff=7743</id>
		<title>Composition Effect Kao Roll - old, please keep</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Composition_Effect_Kao_Roll_-_old,_please_keep&amp;diff=7743"/>
		<updated>2008-04-09T18:40:01Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== The Composition Effect - What is the Source of Difficulty in Problems which Require Application of Several Skills?  ==&lt;br /&gt;
Ido Roll, Yvonne Kao, Kenneth E. Koedinger&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
&lt;br /&gt;
Composite problems, i.e., problems that require the application of more than one skill, have been shown to be harder than a collection of single-step problems requiring the application of the same set of skills.&lt;br /&gt;
A common explanation is that the composition itself imposes an additional level of difficulty. However, an alternative explanation suggests that the composition makes the application of the individual skills themselves harder. According to that explanation, poor feature validity and shallow domain rules make it harder for students to apply the individual skills correctly in the cluttered environment of composite problems, regardless of the need to apply additional skills.&lt;br /&gt;
Our study investigates these issues in two ways: (1) using a DFA that evaluates performance on composite problems and single-step problems with the same data, and (2) evaluating the effect that instruction targeting a common misconception in single-step problems has on composite problems.&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
&lt;br /&gt;
- Composite problems: Problems which require the application of several skills, such as solving 3x+6=0 for x.&lt;br /&gt;
&lt;br /&gt;
- Single-step problems: Problems which require the application of a single skill, such as y+6=0 or 3x=-6&lt;br /&gt;
&lt;br /&gt;
- DFA (Difficulty Factor Analysis): A test that includes pairs of items varying along one dimension only, making it possible to evaluate the difficulty contributed by each dimension along which the problems differ.&lt;br /&gt;
&lt;br /&gt;
- The Composition Effect: The finding that composite problems are harder than a set of single-step problems that use the same skills.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
&lt;br /&gt;
What is the main source of difficulty in composite problems?&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
Significance: This study can shed some light on the source of difficulty in composite problems, and thus can inform the design of relevant instruction and remediation. &lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Instruction in the form of a solved example, targeting a common misconception: identifying base and height in a cluttered environment. &lt;br /&gt;
&lt;br /&gt;
=== Dependent variables ===&lt;br /&gt;
&lt;br /&gt;
Three tests are used in the study:&lt;br /&gt;
- Pre-test: given before all instruction&lt;br /&gt;
- Mid-test: given after students have learned about single-step problems and before composite problems&lt;br /&gt;
- Post-test: given after students have learned and practiced all material. &lt;br /&gt;
&lt;br /&gt;
The tests include the following items, some of which are [[transfer]] items that evaluate robust learning, since they require an adaptive application of the knowledge learned and practiced in class.&lt;br /&gt;
&lt;br /&gt;
* Simple diagram:&lt;br /&gt;
*# no distractors, canonical orientation&lt;br /&gt;
*# distractors,    canonical orientation&lt;br /&gt;
*# no distractors, tilted orientation&lt;br /&gt;
*# distractors,    tilted orientation&lt;br /&gt;
* Complex diagram:&lt;br /&gt;
*# Given complex diagram, ask for skill A&lt;br /&gt;
*# Given complex diagram, ask for skill B&lt;br /&gt;
*# Given steps A and B,   ask for skills C (which requires A and B)&lt;br /&gt;
*# Given complex diagram, ask for C (which requires A and B)&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
&lt;br /&gt;
# The difficulty level in composite problems originates in poor feature validity of the single skills.&lt;br /&gt;
#* An operationalized version of this hypothesis is that performance on items of type &amp;quot;Find measure C based on diagram&amp;quot; will be equivalent to the product of the success rates on the items &amp;quot;Find measure C based on items A and B&amp;quot;, &amp;quot;Find measure A based on diagram&amp;quot;, and &amp;quot;Find measure B based on diagram&amp;quot;.&lt;br /&gt;
# Tilted orientation and distractors still impose difficulty even after students have mastered the skills &lt;br /&gt;
# Direct instruction during the test that targets these misconceptions in the form of a solved example can improve performance&lt;br /&gt;
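The first, operationalized hypothesis amounts to a simple independence calculation: if the component skills are the only source of difficulty, composite success should equal the product of component success rates. A minimal sketch; all rates below are hypothetical illustration values, not study data.

```python
# Sketch of the "poor feature validity" prediction: success on a composite item
# equals the product of the success rates of its independent component skills.
# All numbers are hypothetical illustration values, not study data.

def predicted_composite_rate(component_rates):
    """Predicted composite success rate under the independence assumption."""
    p = 1.0
    for r in component_rates:
        p *= r
    return p

# Hypothetical single-step rates: skill A, skill B, and C given A and B.
p_a, p_b, p_c_given_ab = 0.9, 0.8, 0.95
predicted = predicted_composite_rate([p_a, p_b, p_c_given_ab])
print(round(predicted, 3))  # 0.684
```

If observed performance on the diagram-based composite item falls well below this prediction, the composition itself adds difficulty beyond the individual skills.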
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
None yet.&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Descendents ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Annotated bibliography ===&lt;br /&gt;
&lt;br /&gt;
Bransford, J. D., Brown, A. L., &amp;amp; Cocking, R. R. (Eds.) (2000). How people learn: Brain, mind, experience, and school. National Academy Press.&lt;br /&gt;
Heffernan, N.T., &amp;amp; Koedinger, K.R. (1997) The composition effect in symbolizing: The role of symbol production vs. text comprehension. in proceedings of Nineteenth Annual Conference of the Cognitive Science Society, 307-12. Hillsdale, NJ: Erlbaum.&lt;br /&gt;
Koedinger, K.R., &amp;amp; Anderson, J.R. (1997). Intelligent Tutoring Goes to School in the Big City. International Journal of Artificial Intelligence in Education 8, 30-43&lt;br /&gt;
Koedinger, K. R. &amp;amp; Cross, K. (2000).  Making informed decisions in educational technology design: Toward meta-cognitive support in a cognitive tutor for geometry.  Presented at the annual meeting of the American Educational Research Association, New Orleans, LA.  &lt;br /&gt;
Owen, E., &amp;amp; Sweller, J. (1985).  What do students learn while solving mathematics problems?  Journal of Educational Psychology, 77, 272-284.&lt;br /&gt;
Simon, H. A., &amp;amp; Lea, G. (1974).  Problem solving and rule induction: A unified view.  In L. W. Gregg (Ed.), Knowledge and cognition. Hillsdale, NJ: Erlbaum.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Empirical Study]]&lt;br /&gt;
[[Category:Protected]]&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Help_Lite_(Aleven,_Roll)&amp;diff=7742</id>
		<title>Help Lite (Aleven, Roll)</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Help_Lite_(Aleven,_Roll)&amp;diff=7742"/>
		<updated>2008-04-09T18:39:47Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Hints during tutored problem solving – the effect of fewer hint levels with greater conceptual content ==&lt;br /&gt;
Vincent Aleven, Ido Roll, Kenneth Koedinger&lt;br /&gt;
&lt;br /&gt;
=== Meta-data ===&lt;br /&gt;
&lt;br /&gt;
PIs: Vincent Aleven, Ido Roll&lt;br /&gt;
&lt;br /&gt;
Other Contributors: Ron Salden (post-doc)&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Study # !! Start Date !! End Date !! LearnLab Site !!  # of Students !! Total Participant Hours !! DataShop? &lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;1&#039;&#039;&#039; || 5/2006     || 5/2006   || Wilkinsburg (Geometry)          || 40 || 90 || Yes&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
&lt;br /&gt;
This in vivo experiment compared the effectiveness of two styles of hint sequences during tutored problem solving. The study was carried out in the Geometry LearnLab.&lt;br /&gt;
Two conditions were compared, each working with its own tutor version. The tutor versions differed only with respect to the content of the hint sequences. A key difference between the hint sequences was that the number of hint levels was reduced from about 7 in a typical hint sequence to 2 or 3. This was achieved by removing hints that merely reminded students of their current goal within the problem, by removing hints that encouraged students to try to address their question by using the Glossary, and by being more concise in explaining how a theorem or definition could be applied. At the same time, conceptual content was added, in the form of explanations of geometry terms.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
&lt;br /&gt;
* [[help-seeking behavior]]&lt;br /&gt;
&lt;br /&gt;
* [[Clicking through hints]]&lt;br /&gt;
&lt;br /&gt;
* [[Hint sequence]]&lt;br /&gt;
&lt;br /&gt;
* [[Help avoidance]]&lt;br /&gt;
&lt;br /&gt;
* [[Help abuse]]&lt;br /&gt;
&lt;br /&gt;
* [[Bottom out hint]]&lt;br /&gt;
&lt;br /&gt;
* [[Metacognition]]&lt;br /&gt;
&lt;br /&gt;
* [[Game the system]] &lt;br /&gt;
&lt;br /&gt;
* [[Cognitive tutor]]&lt;br /&gt;
&lt;br /&gt;
=== Research question ===&lt;br /&gt;
&lt;br /&gt;
How is robust learning affected by shorter hint sequences with richer conceptual content?&lt;br /&gt;
&lt;br /&gt;
This was not an independent study, but part of the main [[The Help Tutor Roll Aleven McLaren | Help Seeking]] study.&lt;br /&gt;
&lt;br /&gt;
=== Background and Significance ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
Number and type of hint levels within the [[hint sequence]]: &lt;br /&gt;
* Control: standard Cognitive Tutor hints, including 7 hint levels of different types, containing either domain knowledge or metacognitive advice (such as &#039;search the glossary for ...&#039;)&lt;br /&gt;
* Experimental condition: included only 2-3 hint levels, each of which contains only domain knowledge.&lt;br /&gt;
&lt;br /&gt;
=== Dependent variables ===&lt;br /&gt;
&lt;br /&gt;
The study uses two levels of dependent measures:&lt;br /&gt;
&lt;br /&gt;
Assessing Help Seeking behavior:&lt;br /&gt;
* Analyzing log-files against a model of ideal help-seeking behavior&lt;br /&gt;
&lt;br /&gt;
Assessing domain learning&lt;br /&gt;
* Learning curves while using the tutor&lt;br /&gt;
* % correct on attempts following hint requests&lt;br /&gt;
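The last measure can be computed directly from tutor logs. A minimal sketch, assuming a simplified, hypothetical log format; real Cognitive Tutor logs (e.g., those in DataShop) use a much richer schema.

```python
# Sketch: fraction of attempts made immediately after a hint request that are
# correct. The ("hint",) / ("attempt", is_correct) record format is hypothetical.

def pct_correct_after_hint(log):
    followed_hint = correct = 0
    prev_was_hint = False
    for event in log:
        if event[0] == "hint":
            prev_was_hint = True
        else:  # an attempt
            if prev_was_hint:
                followed_hint += 1
                correct += event[1]
            prev_was_hint = False
    return correct / followed_hint if followed_hint else None

log = [("attempt", 0), ("hint",), ("attempt", 1),
       ("hint",), ("hint",), ("attempt", 0), ("attempt", 1)]
print(pct_correct_after_hint(log))  # 0.5
```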
&lt;br /&gt;
Due to technical and administrative errors, some of the tests were lost and others cannot be attributed to conditions. As a result, no pre- and post-test measures can be used.&lt;br /&gt;
&lt;br /&gt;
=== Hypothesis ===&lt;br /&gt;
&lt;br /&gt;
Students pay more attention to short hint sequences as they feel they are more helpful and easier to understand. Thus, the shorter hint sequences reduce hint abuse, such as students’ clicking through hints until they get the answer, without paying attention to why the answer is what it is. The richer conceptual content helps them to make sense out of the tutor’s hints, reducing implicit learning and also making students more likely to attend to the hints. Thus, there are two reasons why the new hints result in better sense making and less implicit learning.&lt;br /&gt;
&lt;br /&gt;
=== Findings ===&lt;br /&gt;
&lt;br /&gt;
None. Errors in data logging and data collection do not allow for an extensive analysis.&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
&lt;br /&gt;
Having informative, relevant, and on-time hints provides the student with an effective learning trajectory when learning-by-doing becomes too difficult.&lt;br /&gt;
The original hint sequence demands more responsibility from the learner (identifying relevant hints, searching the glossary, etc.), and these activities impose [[cognitive load]]. The updated [[hint sequence]], in contrast, offers relevant instruction when it is required, with little extraneous [[cognitive load]].&lt;br /&gt;
&lt;br /&gt;
=== Descendents ===&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Annotated bibliography ===&lt;br /&gt;
&lt;br /&gt;
Aleven, V., &amp;amp; Koedinger, K.R. (2000) Limitations of student control: Do students know when they need help? in proceedings of 5th International Conference on Intelligent Tutoring Systems, 292-303. Berlin: Springer Verlag.&lt;br /&gt;
&lt;br /&gt;
Aleven, V., McLaren, B.M., Roll, I., &amp;amp; Koedinger, K.R. (2004) Toward tutoring help seeking - Applying cognitive modeling to meta-cognitive skills . in proceedings of 7th Int C on Intelligent Tutoring Systems, 227-39. Berlin: Springer-Verlag.&lt;br /&gt;
&lt;br /&gt;
Aleven, V., Roll, I., McLaren, B.M., Ryu, E.J., &amp;amp; Koedinger, K.R. (2005) An architecture to combine meta-cognitive and cognitive tutoring: Pilot testing the Help Tutor. in proceedings of 12th Int C on Artificial Intelligence in Education, Amsterdam, The Netherlands: IOS press.&lt;br /&gt;
&lt;br /&gt;
Aleven, V., McLaren, B.M., Roll, I., &amp;amp; Koedinger, K.R. (2006). Toward meta-cognitive tutoring: A model of help seeking with a Cognitive Tutor. Int J of Artificial Intelligence in Education(16), 101-30&lt;br /&gt;
&lt;br /&gt;
Roll, I., Aleven, V., &amp;amp; Koedinger, K.R. (2004) Promoting Effective Help-Seeking Behavior through Declarative Instruction. in proceedings of 7th Int C on Intelligent Tutoring Systems, 857-9. Berlin: Springer-Verlag.&lt;br /&gt;
&lt;br /&gt;
Roll, I., Baker, R.S., Aleven, V., McLaren, B.M., &amp;amp; Koedinger, K.R. (2005) Modeling Students’ Metacognitive Errors in Two Intelligent Tutoring Systems. in L. Ardissono,  (Eds.), in proceedings of User Modeling 2005, 379-88. Berlin: Springer-Verlag.&lt;br /&gt;
&lt;br /&gt;
Roll, I., Ryu, E., Sewall, J., Leber, B., McLaren, B.M., Aleven, V., &amp;amp; Koedinger, K.R. (2006) Towards Teaching Metacognition: Supporting Spontaneous Self-Assessment. in proceedings of 8th Int C on Intelligent Tutoring Systems, 738-40. Berlin: Springer Verlag.&lt;br /&gt;
&lt;br /&gt;
Roll, I., Aleven, V., McLaren, B.M., Ryu, E., Baker, R.S., &amp;amp; Koedinger, K.R. (2006) The Help Tutor: Does Metacognitive Feedback Improves Students&#039; Help-Seeking Actions, Skills and Learning? in proceedings of 8th Int C on Intelligent Tutoring Systems, 360-9. Berlin: Springer Verlag.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Empirical Study]]&lt;br /&gt;
[[Category:Protected]]&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=IPL_Instructional_Principles&amp;diff=7502</id>
		<title>IPL Instructional Principles</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=IPL_Instructional_Principles&amp;diff=7502"/>
		<updated>2008-03-25T16:57:56Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Examples */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Brief statement of principle==&lt;br /&gt;
&lt;br /&gt;
Asking students to invent solutions to carefully designed challenges prior to receiving instruction can promote learning from subsequent instruction.&lt;br /&gt;
&lt;br /&gt;
==Description of principle==&lt;br /&gt;
===Operational definition===&lt;br /&gt;
Students should attempt to rank alternatives in an [[invention task]] by [[comparing sets]] of contrasting cases, before receiving direct instruction and practice.&lt;br /&gt;
&lt;br /&gt;
===Examples===&lt;br /&gt;
The following example is an [[invention task]] using [[comparing sets|set comparison]] as a preparation for learning about variance:&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
NASA is about to launch its latest weather satellite to space, with the goal of monitoring global warming. &lt;br /&gt;
In order to put a satellite in orbit, it is sent to space on a rocket. The rocket releases the satellite when it reaches its highest point.&lt;br /&gt;
NASA considers three rockets for this task: Fly-I, Orbitter, and Icarus. Each rocket was tested 4-5 times. At this point NASA wants to choose one rocket for further development.&lt;br /&gt;
&lt;br /&gt;
While the amount of fuel may not have been the right one (that is, perhaps more or less fuel was needed), it was identical in all trials. This means that at this point NASA does not care about the absolute height, since the amount of fuel will need to be adjusted. But NASA does care about the ability to predict what height the rocket will reach – that is, how consistent the rocket is. A consistent rocket arrives at the same height every time.&lt;br /&gt;
&lt;br /&gt;
Which rocket would you recommend?&lt;br /&gt;
 &lt;br /&gt;
The following graphs show the height the rockets reached during testing, relative to the desired height:&lt;br /&gt;
&lt;br /&gt;
[[Image:NASA task.jpg]]&lt;br /&gt;
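The expert solution introduced after this task is variance as a measure of consistency. A minimal sketch of how it ranks the rockets; the trial heights below are invented for illustration and are not the values shown in the task figure.

```python
# Sketch of the expert solution (variance as a consistency measure).
# Trial heights are invented for illustration, not taken from the task figure.

def variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

trials = {
    "Fly-I":    [100, 102, 98, 100],
    "Orbitter": [90, 110, 95, 105, 100],
    "Icarus":   [100, 100, 99, 101],
}

# The most consistent rocket is the one with the smallest variance.
best = min(trials, key=lambda name: variance(trials[name]))
print(best)  # Icarus
```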
&lt;br /&gt;
==Experimental support==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Laboratory experiment support===&lt;br /&gt;
===In vivo experiment support===&lt;br /&gt;
Schwartz &amp;amp; Martin (2004) found that IPL activities help students learn from subsequent instruction.&lt;br /&gt;
[[Roll IPL|Roll, Aleven, Schwartz and Koedinger]] are currently carrying out another classroom study on the topic.&lt;br /&gt;
&lt;br /&gt;
==Theoretical rationale== &lt;br /&gt;
(These entries should link to one or more [[:Category:Learning Processes|learning processes]].)&lt;br /&gt;
&lt;br /&gt;
==Conditions of application==&lt;br /&gt;
Several conditions are being investigated in the current [[Roll IPL|IPL]] study, namely:&lt;br /&gt;
- The need for design in the invention process&lt;br /&gt;
- The need for debugging in the invention process&lt;br /&gt;
&lt;br /&gt;
It is hypothesized, though not yet tested empirically, that the [[comparing sets|set comparison]] tasks should use contrasting cases rather than isomorphic cases.&lt;br /&gt;
&lt;br /&gt;
==Caveats, limitations, open issues, or dissenting views==&lt;br /&gt;
Several researchers object to any form of discovery activity, and argue that [[direct instruction]] is always the superior alternative (Kirschner, Sweller, &amp;amp; Clark, 2006).&lt;br /&gt;
&lt;br /&gt;
==Variations (descendants)==&lt;br /&gt;
==Generalizations (ascendants)==&lt;br /&gt;
==References==&lt;br /&gt;
* Kirschner, P. A., Sweller, J., &amp;amp; Clark, R. E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86.&lt;br /&gt;
* Schwartz, D. L., &amp;amp; Martin, T. (2004). Inventing to prepare for future learning: The hidden efficiency of encouraging original student production in statistics instruction. Cognition and Instruction, 22(2), 129–184.&lt;br /&gt;
[[Category:Glossary]]&lt;br /&gt;
[[Category:Instructional Principle]]&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=IPL_Instructional_Principles&amp;diff=7500</id>
		<title>IPL Instructional Principles</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=IPL_Instructional_Principles&amp;diff=7500"/>
		<updated>2008-03-25T16:57:31Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: /* Examples */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Brief statement of principle==&lt;br /&gt;
&lt;br /&gt;
Asking students to invent solutions to carefully designed challenges prior to receiving instruction can promote learning from subsequent instruction.&lt;br /&gt;
&lt;br /&gt;
==Description of principle==&lt;br /&gt;
===Operational definition===&lt;br /&gt;
Students should attempt to rank alternatives in an [[invention task]] by [[comparing sets]] of contrasting cases, before receiving direct instruction and practice.&lt;br /&gt;
&lt;br /&gt;
===Examples===&lt;br /&gt;
The following example is an [[invention task]] using [[comparing sets|set comparison]] as a preparation for learning about variance:&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
 NASA is about to launch its latest weather satellite to space, with the goal of monitoring global warming. &lt;br /&gt;
 In order to put a satellite in orbit, it is sent to space on a rocket. The rocket releases the satellite when it reaches its highest point.&lt;br /&gt;
 NASA considers three rockets for this task: Fly-I, Orbitter, and Icarus. Each rocket was tested 4-5 times. At this point NASA wants to choose one rocket for further development.&lt;br /&gt;
 &lt;br /&gt;
 While the amount of fuel may not have been the right one (that is, perhaps more or less fuel was needed), it was identical in all trials. This means that at this point NASA does not care about the absolute height, since the amount of fuel will need to be adjusted. But NASA does care about the ability to predict what height the rocket will reach – that is, how consistent the rocket is. A consistent rocket arrives at the same height every time.&lt;br /&gt;
 &lt;br /&gt;
 Which rocket would you recommend?&lt;br /&gt;
 &lt;br /&gt;
 The following graphs show the height the rockets reached during testing, relative to the desired height:&lt;br /&gt;
 &lt;br /&gt;
 [[Image:NASA task.jpg]]&lt;br /&gt;
&lt;br /&gt;
==Experimental support==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Laboratory experiment support===&lt;br /&gt;
===In vivo experiment support===&lt;br /&gt;
Schwartz &amp;amp; Martin (2004) found that invention as preparation for learning (IPL) activities help students learn from subsequent instruction.&lt;br /&gt;
[[Roll IPL|Roll, Aleven, Schwartz and Koedinger]] are currently carrying out another classroom study on the topic.&lt;br /&gt;
&lt;br /&gt;
==Theoretical rationale== &lt;br /&gt;
(These entries should link to one or more [[:Category:Learning Processes|learning processes]].)&lt;br /&gt;
&lt;br /&gt;
==Conditions of application==&lt;br /&gt;
Several conditions are being investigated in the current [[Roll IPL|IPL]] study, namely:&lt;br /&gt;
* The need for design in the invention process&lt;br /&gt;
* The need for debugging in the invention process&lt;br /&gt;
&lt;br /&gt;
It is hypothesized, though not yet tested empirically, that [[comparing sets|set comparison]] tasks should use contrasting cases rather than isomorphic cases.&lt;br /&gt;
&lt;br /&gt;
==Caveats, limitations, open issues, or dissenting views==&lt;br /&gt;
Several researchers object to any form of discovery activity and argue that [[direct instruction]] is always the superior alternative (Kirschner, Sweller, &amp;amp; Clark, 2006).&lt;br /&gt;
&lt;br /&gt;
==Variations (descendants)==&lt;br /&gt;
==Generalizations (ascendants)==&lt;br /&gt;
==References==&lt;br /&gt;
* Kirschner, P. A., Sweller, J., &amp;amp; Clark, R. E. (2006). Why minimal guidance during instruction does not&lt;br /&gt;
work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and&lt;br /&gt;
inquiry-based teaching. Educational Psychologist, 41(2), 75–86.&lt;br /&gt;
* Schwartz, D. L., &amp;amp; Martin, T. (2004). Inventing to prepare for future learning:&lt;br /&gt;
The hidden efficiency of encouraging original student production in statistics&lt;br /&gt;
instruction. Cognition and Instruction, 22(2), 129–184.&lt;br /&gt;
[[Category:Glossary]]&lt;br /&gt;
[[Category:Instructional Principle]]&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=File:NASA_task.jpg&amp;diff=7498</id>
		<title>File:NASA task.jpg</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=File:NASA_task.jpg&amp;diff=7498"/>
		<updated>2008-03-25T16:56:39Z</updated>

		<summary type="html">&lt;p&gt;Idoroll: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Idoroll</name></author>
	</entry>
</feed>