Grading (LearnLab wiki, revision by Bmclaren, 2010-08-05)
<hr />
<div>'''Status: Prioritization Needed'''<br />
<br />
== User Story ==<br />
<br />
As a researcher, I want DataShop to provide a report or extra data, so that I can grade online tests and have pre/post test data associated with my study.<br />
<br />
== Notes/Comments ==<br />
The following notes come from an email thread between Bruce McLaren and Vincent Aleven. August 2010. There is also a related request from Ruth Wylie.<br />
<br />
==== Provide a "last attempt" field in the transaction export ====<br />
* Vincent suggested that DataShop simply put a new field in the transaction export to indicate which transaction is the last transaction for a given step. The researcher would still need to do some work in Excel to produce the actual grades for the students. But this seems relatively simple for DataShop to do.<br />
* "Even if the DataShop is kind enough to implement the "test score" facility, I would hope it will also support the "last attempt" field, which would appear to be even easier to implement (since identifying the last attempt is a step in computing the test score) and provide great bang-for-the-buck. This field is still very useful for me and the projects that I have been involved with. For example, it is useful when the standard "test score" analysis is not quite what you need (e.g., when test items depend on each other in certain ways, or when you want more fine-grained measures of test performance, such as transfer v. reproduction - this is very very very typical and endorsed if not required by the PSLC theoretical framework). Further, given that we typically analyze test scores in Excel anyway, and there are typically so many ways to look at test scores, it seems important to offer flexibility (meaning, the "last attempt" field)." -- Vincent email 8/3/2010<br />
* Related Request: Last Attempts - Test Item :: As a researcher (or educator), I want to see data for last attempts only so I can determine correctness on test items. -- Ruth Wylie, July 3, 2008 <br />
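The "last attempt" extraction requested above can be sketched in a few lines. This is a hypothetical illustration only: the record layout (student, problem, step, time, outcome) and the function name are assumptions, not DataShop's actual transaction-export schema.

```python
# Sketch: flag the last transaction per (student, problem, step).
# Field names are illustrative, not DataShop's real export columns.

def flag_last_attempts(transactions):
    """Return the transactions with an added 'is_last_attempt' boolean.

    The last attempt for a (student, problem, step) triple is the
    transaction with the greatest timestamp.
    """
    last_time = {}
    for tx in transactions:
        key = (tx["student"], tx["problem"], tx["step"])
        if key not in last_time or tx["time"] > last_time[key]:
            last_time[key] = tx["time"]
    return [
        {**tx,
         "is_last_attempt":
             tx["time"] == last_time[(tx["student"], tx["problem"], tx["step"])]}
        for tx in transactions
    ]
```

A researcher could then filter on `is_last_attempt` instead of doing the equivalent manipulation by hand in Excel.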
<br />
==== Provide two new fields in the student-step rollup ====<br />
* DataShop can identify all problems where the tutor_flag field equals 'test', 'pre-test', or 'post-test'. Then, for each step in those problems, find the last transaction for that step and its outcome (CORRECT vs. INCORRECT), and add two new columns to the student-step rollup: one column indicating whether the step is part of a test problem, and one column indicating whether the student was right or wrong on the last transaction. -- [[User:Alida|Alida]] 09:52, 4 August 2010 (EDT)<br />
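The rollup computation proposed above can be sketched as follows. The tutor_flag values come from the proposal itself; the record layout and function name are illustrative assumptions, not DataShop's real rollup format.

```python
# Sketch: derive the two proposed student-step rollup columns from a
# transaction log. Field names are illustrative assumptions.

TEST_FLAGS = {"test", "pre-test", "post-test"}

def rollup_columns(transactions):
    """Map each (student, problem, step) to (is_test_step, last_outcome)."""
    rollup = {}
    for tx in sorted(transactions, key=lambda t: t["time"]):
        key = (tx["student"], tx["problem"], tx["step"])
        # Later transactions overwrite earlier ones, so the stored value
        # reflects the outcome of the last transaction for the step.
        rollup[key] = (tx["tutor_flag"] in TEST_FLAGS, tx["outcome"])
    return rollup
```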
<br />
==== Grading ====<br />
* "I think Vincent's idea of adding a new field indicating whether an answer is the last one provided for a particular step is a great start. However, since it still requires (potentially error-prone) manual manipulation in Excel after exporting the data, why not take it a step further and also provide a DataShop capability to generate a "test score" (and a data file to give to teachers -- that would really help me) from a given set of selections?" -- Bruce email 8/3/2010<br />
* Producing a grade that teachers could use seems difficult. How would DataShop know how many steps are part of each problem and how many problems are part of a test? If a student skips a step, and there are no logs for that step, then that student would have fewer steps than another student. -- [[User:Alida|Alida]] 09:47, 4 August 2010 (EDT)<br />
* It is true that there are a variety of ways one can grade an online test with DataShop log data. However, I think there is a core of perhaps 4 or 5 different ways it might be done, none of which are particularly complex to implement. I'd suggest a DataShop interface that allows a few variations on grading that the researcher can select and that the DataShop implements. (And there would still be the possibility of simply using the extra field Vincent has requested, if none of the implemented approaches works.) -- Bruce 8/5/2010<br />
* One huge advantage of providing several methods, available in one place (namely, the DataShop) is that the researcher could then rapidly experiment with the results of different grading approaches, ones they may not have tried otherwise, and then decide to grade their online tests differently than originally conceived. I doubt that this would be done with off-line, ad hoc grading, as things stand now, given the effort to implement different grading approaches for each project separately. I certainly would use such a facility. -- Bruce 8/5/2010<br />
* And here is another advantage: Recently, I have had several researchers ask me for the pre-post test scores for my stoich studies. Since I have always calculated these scores separately and off-line, they are not available in the DataShop. If the DataShop provided a grading mechanism, the scores for all experiments that have online tests could be directly and easily stored with the original data, thus facilitating future research on study data. -- Bruce 8/5/2010<br />
* Grading approaches I've used on stoich:<br />
** Average all of the steps in the test (pre or post), taking the last submitted value for each step as the "final answer"<br />
** Weighted average of all problems (1 problem = X steps), taking the last submitted value for each step as the "final answer"<br />
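The two stoich grading approaches can be sketched as follows, assuming last-attempt correctness (1 or 0) per step has already been extracted and grouped by problem. The data shape and function names are illustrative assumptions.

```python
# Sketch of the two grading approaches used on stoich, given a mapping
# from problem id to the list of last-attempt step outcomes (1 = correct,
# 0 = incorrect). Names are illustrative.

def grade_by_step(problems):
    """Average over all steps in the test, each step weighted equally."""
    outcomes = [o for steps in problems.values() for o in steps]
    return sum(outcomes) / len(outcomes)

def grade_by_problem(problems):
    """Weighted average of problems: each problem counts equally,
    regardless of how many steps it contains (1 problem = X steps)."""
    per_problem = [sum(steps) / len(steps) for steps in problems.values()]
    return sum(per_problem) / len(per_problem)
```

The two measures diverge exactly when problems have unequal step counts, which is why offering both (rather than one fixed formula) matters.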
<br />
<br><br />
----<br />
See complete [[DataShop Feature Wish List]].<br><br />
See complete [[Collected User Requests]].<br />
[[Category:Protected]]<br />
[[Category:DataShop]]</div>
McLaren - The Assistance Dilemma And Discovery Learning (LearnLab wiki, revision by Bmclaren, 2009-11-20)
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, conducting classroom study, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, Chemistry domain expertise, support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, Chemistry domain expertise, support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one instance of the [[assistance dilemma]], an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
Our hypothesis was that students would learn most effectively when assistance giving and withholding are balanced, i.e., in the Tutored Condition.<br />
<br />
===Background and Significance===<br />
<br />
A key goal of educational technology research is to find the right level of support to embed in computer-based educational systems. The so-called [[assistance dilemma]] is central to this goal: “How should learning environments balance assistance giving and withholding to achieve optimal student learning?” (Koedinger & Aleven, 2007). Assistance giving allows students to move forward when they are struggling and truly need help, yet can rob them of the motivation to learn on their own. On the other hand, assistance withholding encourages students to think and learn for themselves, yet can cause frustration when they are unsure of what to do next. <br />
<br />
Although the “assistance dilemma” is a relatively new term, it describes a central issue in the learning sciences that has been debated for some time. The extreme position of assistance giving is usually called direct instruction or guided learning. Supporters of this position (e.g., Kirschner, Sweller, & Clark, 2006; Klahr & Nigam, 2004; Mayer, 2004) argue that higher assistance (direct instruction and/or tutoring of basic skills) leads to better learning results because it provides information that students cannot create on their own. Supporters of the opposing position (e.g., Bruner, 1961; Steffe & Gale, 1995) advocate a much lower assistance approach (i.e., assistance withholding), often called discovery or inquiry learning.<br />
<br />
===Independent Variables===<br />
<br />
The study compared three conditions in which students used different versions of the VLab to solve problems in thermochemistry: <br />
* (Condition 1) ''The Inquiry-learning Condition'', in which students worked with a version of VLab with no hints and minimal feedback, <br />
* (Condition 2) ''The Tutored Condition'', in which students could request hints and received feedback only when they were severely off track, and <br />
* (Condition 3) ''The Direct-instruction Condition'', in which students were directed to follow a prescribed problem-solving path.<br />
<br />
===Dependent Variables===<br />
<br />
* ''Near-transfer posttest'': Subdivided into Task 1, a collection of multiple-choice questions, and Task 2, in which students had to use the proportionality between temperature change and concentration in a calculation. The near-transfer portion of the posttest probed the student’s understanding of the direct proportionality between temperature change and solution concentration. <br />
* ''Conceptual-understanding posttest'': Two items for which responses were given as free-form text. In the first item, students were asked to write a general design strategy for how to create a solution with a desired temperature. The second item restated the goal of the activity (heating food while on a camping trip) and asked students to list the factors of this approach that would limit meeting this goal.<br />
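<br />
Task 2 rests on the direct proportionality between temperature change and solution concentration. The calculation can be sketched as below; this is an illustration only, not an item from the actual test, and the helper name is hypothetical:<br />
<br />
```python
def predict_temperature_change(c_ref, dt_ref, c_new):
    """Scale a reference measurement, assuming temperature change
    is directly proportional to concentration (dT = k * c)."""
    return dt_ref * (c_new / c_ref)

# e.g., if a 1.0 M solution produced a 5-degree change,
# a 2.0 M solution is predicted to produce a 10-degree change.
```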
<br />
Because we only had access to students for a single class period, we were unable to do a long-term retention posttest.<br />
<br />
===Findings===<br />
<br />
We first scored students’ pretests and ran an ANOVA, with condition as a between-subjects factor, to check for equivalence between conditions. Tasks had only one acceptable solution and were graded by a program. As there was no significant difference in pretest scores between the three conditions, F(2,77)=0.292, p=.748, we assume that students in the three conditions started with a similar level of knowledge. <br />
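<br />
The between-subjects F statistic used in this check can be computed as follows. This is a generic one-way ANOVA sketch, not the study’s actual analysis script, and any data passed to it here would be hypothetical:<br />
<br />
```python
def one_way_anova_f(groups):
    """One-way, between-subjects ANOVA.
    Returns (F, df_between, df_within) for a list of score lists,
    one list per condition."""
    k = len(groups)                                # number of conditions
    n = sum(len(g) for g in groups)                # total number of students
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]      # per-condition means
    # Variance between condition means, weighted by group size
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    # Variance of students around their own condition mean
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within
```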
<br />
Next, we evaluated the posttest scores. Tasks in the near-transfer part of the posttest also had only one acceptable solution and were scored by a program. Three reviewers graded the conceptual-understanding tasks of the posttest, answered in free-form text, using the same rubric to ensure objectivity. In approximately 90% of cases at least two graders agreed; in the other 10%, the average of all three grades was taken. We removed seven outliers from the population – students who scored less than a quarter of the maximum possible points on the posttest. The means of the overall posttest scores, as well as the means of the individual components of the posttest (i.e., the near-transfer scores and conceptual-understanding scores), are shown below.<br />
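<br />
The grade-resolution rule described above (take the majority grade when at least two of the three graders agree, otherwise average all three) can be sketched as follows; the function name is hypothetical:<br />
<br />
```python
from collections import Counter

def consensus_grade(grades):
    """Resolve three graders' scores for one free-form response:
    return the majority grade if at least two graders agree,
    otherwise the mean of all three grades."""
    assert len(grades) == 3
    value, count = Counter(grades).most_common(1)[0]
    return value if count >= 2 else sum(grades) / len(grades)
```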
<br />
[[Image:BorekEtAlResults.jpg|600px|center]] <br />
<br />
We then ran ANCOVAs on the posttest scores, using the pretest scores as the covariate, to evaluate differences in the posttest scores between the conditions. Although the mean scores were higher in the Tutored Condition for both the overall score and the near-transfer score, the differences were not significant, F(2,77)=2.035, p=.138; F(2,77)=0.057, p=.944. However, we did find a significant result on the conceptual-understanding part of the posttest: Students in the Tutored Condition did better on conceptual-understanding tasks than students in the other two conditions, F(2,77)=3.783, p=.007. These results support our hypothesis: Students in the Tutored Condition – the mid-level assistance approach – showed better learning results than students in the other two conditions.<br />
<br />
Finally, we segmented students into strong (top 50%) and weak (bottom 50%) groups based on their pretest scores. In another ANCOVA, again using pretest scores as the covariate, students in the Tutored Condition who did better on the pretest benefited more in conceptual understanding than students in the other conditions, F(2,37)=4.699, p=.015. Weaker students in the Tutored Condition also did better on the conceptual-understanding part than weaker students in the other conditions, but not significantly, F(2,37)=1.193, p=.315.<br />
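<br />
The median split used here can be sketched as below. This is an illustrative helper, not the study’s code; ties at the median are broken arbitrarily by sort order:<br />
<br />
```python
def median_split(pretest_scores):
    """Split students into strong (top 50%) and weak (bottom 50%)
    groups by pretest score. Input: {student_id: score}."""
    ranked = sorted(pretest_scores.items(), key=lambda kv: kv[1], reverse=True)
    half = len(ranked) // 2
    strong = [student for student, _ in ranked[:half]]
    weak = [student for student, _ in ranked[half:]]
    return strong, weak
```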
<br />
===Explanation===<br />
<br />
In summary, we observed differences between the three conditions in conceptual understanding, where students in the Tutored Condition scored higher than students in the other conditions. In addition, stronger students in the Tutored Condition had better results than stronger students in the other conditions on the conceptual questions. So why did students in the Tutored Condition achieve greater conceptual understanding? One possible explanation is that the tutored students were able to make more active decisions, leading to higher motivation. At the same time, they received help when they needed it, which may have prevented frustration. Both of these aspects may, in turn, have led to more learning. In contrast, students in the Direct-instruction Condition may have been demotivated, unable to make their own decisions; that is, they may have received too much assistance for learning. This was hinted at by some comments in the feedback questionnaire, e.g., “I disliked having to follow the instructions. It’s like communist chemistry.” Students in the Inquiry-learning Condition, on the other hand, may have gotten frustrated when they did not know what to do and did not work as hard at learning; that is, they may have received too little assistance. This was suggested by some feedback in the questionnaire, e.g., “It makes me feel really stupid.” Both of these comments are consistent with our classroom observation of the students in the two conditions.<br />
<br />
The differences in conceptual learning between the Tutored Condition and the other conditions were larger, and statistically significant, for stronger students than for weaker students. We have two possible interpretations for this finding. First, stronger students are likely to have higher metacognitive awareness than weaker students and thus may have used the available hints and feedback of the Tutored Condition more effectively. Second, stronger students, who tend to be more independent learners, may have simply been more motivated to learn since they were allowed to make their own decisions and construct their own knowledge, asking for help only when they really felt they needed it. <br />
<br />
Finally, why were differences only observed for conceptual questions? This can be explained by the nature of the camping problem, which is focused on conceptual aspects of thermochemistry. That is, the camping problem, and use of the VLab to solve it, focused students on running experiments to learn concepts, rather than procedures or calculations. The procedure and calculations necessary to solve the near-transfer problems were done outside of the VLab in all conditions; thus, we would not (necessarily) expect any of the conditions to do better than the others on the near-transfer part of the posttest.<br />
<br />
This study is part of the [[Cognitive Factors]] thrust.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
===Annotated Bibliography===<br />
<br />
*Borek, A., McLaren, B.M., Karabinos, M., & Yaron, D. (2009). How Much Assistance is Helpful to Students in Discovery Learning? In U. Cress, V. Dimitrova, & M. Specht (Eds.), Proceedings of the Fourth European Conference on Technology Enhanced Learning, Learning in the Synergy of Multiple Disciplines (EC-TEL 2009), LNCS 5794, September/October 2009, Nice, France (pp. 391–404). Springer-Verlag Berlin Heidelberg. [http://www.learnlab.org/research/wiki/index.php/Image:BorekEtAl-AssistanceForDiscoveryTasks-ECTEL2009.pdf pdf file]<br />
<br />
===References===<br />
<br />
*Bruner, J.S. (1961). The Act of Discovery. Harvard Educational Review, 31, 21–32.<br />
*Kirschner, P.A., Sweller, J., & Clark, R.E. (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 41(2), 75–86.<br />
*Klahr, D. & Nigam, M. (2004). The Equivalence of Learning Paths in Early Science Instruction: Effects of Direct Instruction and Discovery Learning. Psychological Science, 15(10), 661–667.<br />
*Koedinger, K.R. & Aleven, V. (2007). Exploring the Assistance Dilemma in Experiments with Cognitive Tutors. Educational Psychology Review, 19(3), 239–264.<br />
*Mayer, R.E. (2004). Should There Be a Three-Strikes Rule Against Pure Discovery Learning? The Case for Guided Methods of Instruction. American Psychologist, 59(1), 14–19.<br />
*Steffe, L. & Gale, J. (1995). Constructivism in Education. Hillsdale, NJ: Lawrence Erlbaum Associates.</div>
<hr />
<div></div>Bmclarenhttps://learnlab.org/wiki/index.php?title=McLaren_-_The_Assistance_Dilemma_And_Discovery_Learning&diff=10114McLaren - The Assistance Dilemma And Discovery Learning2009-11-20T23:49:40Z<p>Bmclaren: /* Explanation */</p>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, conducting classroom study, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, Chemistry domain expertise, support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, Chemistry domain expertise, support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one instance of the [[assistance dilemma]], an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
Our hypothesis was that students would learn most effectively when assistance giving and withholding are balanced, i.e., in the Tutored Condition.<br />
<br />
===Background and Significance===<br />
<br />
A key goal of educational technology research is to find the right level of support to imbue in computer-based educational systems. The so-called [[assistance dilemma]] is central to this goal: “How should learning environments balance assistance giving and withholding to achieve optimal student learning?” (Koedinger & Aleven, 2007). Assistance giving allows students to move forward when they are struggling and truly need help, yet can rob them of the motivation to learn on their own. On the other hand, assistance withholding encourages students to think and learn for themselves, yet can cause frustration when they are unsure of what to do next. <br />
<br />
Although the “assistance dilemma” is a relatively new term, it describes a central issue in the learning sciences that has been debated for some time. The extreme position of assistance giving is usually called direct-instruction or guided learning. <br />
Supporters of this position (e.g. Kirschner, Sweller, & Clark, 2006, Klahr & Nigam, 2004, Mayer, 2004) argue that higher assistance (direct instruction and/or tutoring of basic skills) leads to better learning results because it provides information that students cannot create on their own. Supporters of the opposing position (e.g. Bruner, 1961, Steffe & Gale, 1995) advocate a much lower assistance approach (i.e.,assistance withholding), often called discovery or inquiry learning.<br />
<br />
===Independent Variables===<br />
<br />
The study compared three conditions in which students used different versions of the VLab to solve problems in thermo chemistry: <br />
* (Condition 1) ''The Inquiry-learning Condition'', in which students worked with a version of VLab with no hints and minimal feedback, <br />
* (Condition 2) ''The Tutored Condition'', in which students could request hints and received feedback only when they were severely off track, and <br />
* (Condition 3) ''The Direct-instruction Condition'', in which students were directed to follow a prescribed problem-solving path.<br />
<br />
===Dependent Variables===<br />
<br />
* ''Near-transer posttest'': Subdivided into Task 1, which was a collection of several multiple-choice questions, and Task 2, in which students had to use the proportionality of temperature change to the concentration for a calculation. The near-transfer portion of the posttest probed the student’s understanding of the direct proportionality between temperature change and solution concentration. <br />
* ''Conceptual-understanding posttest'': Two items for which responses were given as free-form text. In the first item, students were asked to write a general design strategy for how to create a solution with a desired temperature. The second item restated the goal of the activity (heating food while on a camping trip) and asked students to list the factors of this approach that would limit meeting this goal.<br />
<br />
Because we only had access to students for a single class period, we were unable to do a long-term retention posttest.<br />
<br />
===Findings===<br />
<br />
We first scored and ran an ANOVA on students’ pretests, to assure equality between conditions, with conditions as a between-subjects factor. Tasks had only one acceptable solution and were graded by a program. As there was no significant difference in the pretest between the three conditions, F(2,77)=0.292, p=.748, we assume that students in the three conditions started with a similar level of knowledge. <br />
<br />
Next, we evaluated the posttest scores. Tasks in the near-transfer part of the posttest also had only one acceptable solution and were scored by a program. Three reviewers graded the conceptual-understanding tasks of the posttest, answered in free-form text, using the same rubric to ensure objectivity. In approximately 90% of cases there was agreement by at least two graders, in the other 10% the average of all three grades was taken. We removed seven outliers from the population – students who scored less than a quarter of the maximal reachable points in the posttest. The means of the overall posttest scores, as well as the means of the individual components of the posttest (i.e., the near-transfer scores and conceptual-understanding scores), are shown below.<br />
<br />
[[Image:BorekEtAlResults.jpg|600px|center]] <br />
<br />
We then ran ANCOVAs on the posttest scores, using the pretest scores as the covariate, to evaluate differences in the posttest scores between the conditions. Although the mean scores were higher in the Tutored Condition for both the overall score and the near-transfer score, the differences were not significant, F(2,77)=2.035, p=.138; F(2,77)=0.057, p=.944. However, we did find a significant result on the conceptual-understanding part of the posttest: Students in the Tutored Condition did better on conceptual-understanding tasks than students in the other two conditions, F(2,77)=3.783, p=.007. These results support our hypothesis: Students in the Tutored Condition – the mid-level assistance approach – showed better learning results than students in the other two conditions.<br />
<br />
Finally, we segmented students into strong (best 50%) and weak (worst 50%) ups based on their pretest scores. In another ANCOVA, again using pretest scores as the covariate, students in the Tutored Condition who did better on the pretest benefitted more regarding conceptual understanding than students in the other conditions, F(2,37)=4.699, p=.015. Weaker students in the Tutored Condition also did better on the conceptual-understanding part than weaker students in the other conditions, but not significantly, F(2,37)=1.193, p=.315.<br />
<br />
===Explanation===<br />
<br />
In summary, we observed differences between the three conditions in conceptual understanding, where students in the Tutored Condition scored higher than students in the other conditions. In addition, stronger students in the Tutored Condition had better results than stronger students in the other conditions on the conceptual questions. So why did students in the Tutored Condition achieve greater conceptual understanding? One possible explanation is that the tutored students were able to make more active decisions, leading to higher motivation. At the same time, they received help when they needed it, which may have prevented frustration. Both of these aspects may, in turn, have led to more learning. In contrast, students in the Direct-instruction Condition may have been demotivated, unable to make their own decisions; that is, they may have received too much assistance for learning. This was hinted at by some comments in the feedback questionnaire, e.g. “I disliked having to follow the instructions. It‘s like communist chemistry.” Students in the Inquiry-learning Condition, on the other hand, may have gotten frustrated when they did not know what to do and did not work as hard at learning; that is, they may have received too little assistance. This was suggested by some feedback in the questionnaire, e.g., “It makes me feel really stupid.” Both of these comments are consistent with our classroom observation of the students in the two conditions.<br />
<br />
The differences in conceptual learning were larger and significant for stronger students than weaker students compared to other conditions. We have two possible interpretations for this finding. First, stronger students are likely to have a higher metacognitive awareness than weaker students and thus may have used the available hints and feedback of the Tutored Condition more effectively. Second, stronger students, who tend to be more independent learners, may have simply been more motivated to learn since they were allowed to make their own decisions and construct their own knowledge, asking for help only when they really felt they needed it. <br />
<br />
Finally, why were differences only observed for conceptual questions? This can be explained by the nature of the camping problem, which is focused on conceptual aspects of thermo chemistry. That is, the camping problem, and use of the VLab to solve it, focused students on running experiments to learn concepts, rather than procedures or calculations. The procedure and calculations necessary to solve the <br />
near-transfer problems were done outside of the VLab in all conditions; thus, we would not (necessarily) expect that any of the conditions would do better than the others in the near-transfer part of the posttest.<br />
<br />
This study is part of the [[Cognitive Factors]] thrust.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
===Annotated Bibliography===<br />
<br />
*Borek, A., McLaren, B.M., Karabinos, M., & Yaron, D. (2009). How Much Assistance is Helpful to Students in Discovery Learning? In U. Cress, V. Dimitrova, & M. Specht (Eds.), Proceedings of the Fourth European Conference on Technology Enhanced Learning, Learning in the Synergy of Multiple Disciplines (EC-TEL 2009), LNCS 5794, September/October 2009, Nice, France. (pp. 391-404). Springer-Verlag Berlin Heidelberg.<br />
<br />
===References===<br />
<br />
*Kirschner, P.A., Sweller, J., & Clark, R.E. (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 75—86.<br />
*Klahr, D. & Nigam, M. (2004). The Equivalence of Learning Paths in Early Science Instruction - Effects of Direct Instruction and Discovery Learning. Psychological Science, 661—667. <br />
*Koedinger, K.R. & Aleven, V. (2007). Exploring the Assistance Dilemma in Experiments with Cognitive Tutors. Educational Psychology Review 19, 239—264.<br />
*Mayer, R.E. (2004). Should There Be a Three-Strikes Rule Against Pure Discovery Learning? - The Case for Guided Methods of Instruction. American Psychologist, 14—19.<br />
* Bruner, J.S. (1961). The Art of Discovery. Harvard Educational Review (31), 21—32.<br />
* Steffe, L. & Gale, J. (1995). Constructivism in Education. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.</div>Bmclarenhttps://learnlab.org/wiki/index.php?title=McLaren_-_The_Assistance_Dilemma_And_Discovery_Learning&diff=10113McLaren - The Assistance Dilemma And Discovery Learning2009-11-20T23:47:53Z<p>Bmclaren: /* Abstract */</p>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, conducting classroom study, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, Chemistry domain expertise, support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, Chemistry domain expertise, support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one instance of the [[assistance dilemma]], an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
Our hypothesis was that students would learn most effectively when assistance giving and withholding are balanced, i.e., in the Tutored Condition.<br />
<br />
===Background and Significance===<br />
<br />
A key goal of educational technology research is to find the right level of support to imbue in computer-based educational systems. The so-called [[assistance dilemma]] is central to this goal: “How should learning environments balance assistance giving and withholding to achieve optimal student learning?” (Koedinger & Aleven, 2007). Assistance giving allows students to move forward when they are struggling and truly need help, yet can rob them of the motivation to learn on their own. On the other hand, assistance withholding encourages students to think and learn for themselves, yet can cause frustration when they are unsure of what to do next. <br />
<br />
Although the “assistance dilemma” is a relatively new term, it describes a central issue in the learning sciences that has been debated for some time. The extreme position of assistance giving is usually called direct-instruction or guided learning. <br />
Supporters of this position (e.g. Kirschner, Sweller, & Clark, 2006, Klahr & Nigam, 2004, Mayer, 2004) argue that higher assistance (direct instruction and/or tutoring of basic skills) leads to better learning results because it provides information that students cannot create on their own. Supporters of the opposing position (e.g. Bruner, 1961, Steffe & Gale, 1995) advocate a much lower assistance approach (i.e.,assistance withholding), often called discovery or inquiry learning.<br />
<br />
===Independent Variables===<br />
<br />
The study compared three conditions in which students used different versions of the VLab to solve problems in thermo chemistry: <br />
* (Condition 1) ''The Inquiry-learning Condition'', in which students worked with a version of VLab with no hints and minimal feedback, <br />
* (Condition 2) ''The Tutored Condition'', in which students could request hints and received feedback only when they were severely off track, and <br />
* (Condition 3) ''The Direct-instruction Condition'', in which students were directed to follow a prescribed problem-solving path.<br />
<br />
===Dependent Variables===<br />
<br />
* ''Near-transer posttest'': Subdivided into Task 1, which was a collection of several multiple-choice questions, and Task 2, in which students had to use the proportionality of temperature change to the concentration for a calculation. The near-transfer portion of the posttest probed the student’s understanding of the direct proportionality between temperature change and solution concentration. <br />
* ''Conceptual-understanding posttest'': Two items for which responses were given as free-form text. In the first item, students were asked to write a general design strategy for how to create a solution with a desired temperature. The second item restated the goal of the activity (heating food while on a camping trip) and asked students to list the factors of this approach that would limit meeting this goal.<br />
<br />
Because we only had access to students for a single class period, we were unable to do a long-term retention posttest.<br />
<br />
===Findings===<br />
<br />
We first scored and ran an ANOVA on students’ pretests, to assure equality between conditions, with conditions as a between-subjects factor. Tasks had only one acceptable solution and were graded by a program. As there was no significant difference in the pretest between the three conditions, F(2,77)=0.292, p=.748, we assume that students in the three conditions started with a similar level of knowledge. <br />
<br />
Next, we evaluated the posttest scores. Tasks in the near-transfer part of the posttest also had only one acceptable solution and were scored by a program. Three reviewers graded the conceptual-understanding tasks of the posttest, answered in free-form text, using the same rubric to ensure objectivity. In approximately 90% of cases there was agreement by at least two graders, in the other 10% the average of all three grades was taken. We removed seven outliers from the population – students who scored less than a quarter of the maximal reachable points in the posttest. The means of the overall posttest scores, as well as the means of the individual components of the posttest (i.e., the near-transfer scores and conceptual-understanding scores), are shown below.<br />
<br />
[[Image:BorekEtAlResults.jpg|600px|center]] <br />
<br />
We then ran ANCOVAs on the posttest scores, using the pretest scores as the covariate, to evaluate differences in the posttest scores between the conditions. Although the mean scores were higher in the Tutored Condition for both the overall score and the near-transfer score, the differences were not significant, F(2,77)=2.035, p=.138; F(2,77)=0.057, p=.944. However, we did find a significant result on the conceptual-understanding part of the posttest: Students in the Tutored Condition did better on conceptual-understanding tasks than students in the other two conditions, F(2,77)=3.783, p=.007. These results support our hypothesis: Students in the Tutored Condition – the mid-level assistance approach – showed better learning results than students in the other two conditions.<br />
<br />
Finally, we segmented students into strong (best 50%) and weak (worst 50%) ups based on their pretest scores. In another ANCOVA, again using pretest scores as the covariate, students in the Tutored Condition who did better on the pretest benefitted more regarding conceptual understanding than students in the other conditions, F(2,37)=4.699, p=.015. Weaker students in the Tutored Condition also did better on the conceptual-understanding part than weaker students in the other conditions, but not significantly, F(2,37)=1.193, p=.315.<br />
<br />
===Explanation===<br />
<br />
In summary, we observed differences between the three conditions in conceptual understanding, where students in the Tutored Condition scored higher than students in the other conditions. In addition, stronger students in the Tutored Condition had better results than stronger students in the other conditions on the conceptual questions. So why did students in the Tutored Condition achieve greater conceptual understanding? One possible explanation is that the tutored students were able to make more active decisions, leading to higher motivation. At the same time, they received help when they needed it, which may have prevented frustration. Both of these aspects may, in turn, have led to more learning. In contrast, students in the Direct-instruction Condition may have been demotivated, unable to make their own decisions; that is, they may have received too much assistance for learning. This was hinted at by some comments in the feedback questionnaire, e.g., “I disliked having to follow the instructions. It’s like communist chemistry.” Students in the Inquiry-learning Condition, on the other hand, may have gotten frustrated when they did not know what to do and did not work as hard at learning; that is, they may have received too little assistance. This was suggested by some feedback in the questionnaire, e.g., “It makes me feel really stupid.” Both of these comments are consistent with our classroom observation of the students in the two conditions.<br />
<br />
This study is part of the [[Cognitive Factors]] thrust.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
===Annotated Bibliography===<br />
<br />
*Borek, A., McLaren, B.M., Karabinos, M., & Yaron, D. (2009). How Much Assistance is Helpful to Students in Discovery Learning? In U. Cress, V. Dimitrova, & M. Specht (Eds.), Proceedings of the Fourth European Conference on Technology Enhanced Learning, Learning in the Synergy of Multiple Disciplines (EC-TEL 2009), LNCS 5794, September/October 2009, Nice, France. (pp. 391-404). Springer-Verlag Berlin Heidelberg.<br />
<br />
===References===<br />
<br />
* Bruner, J.S. (1961). The Act of Discovery. Harvard Educational Review, 31(1), 21–32.<br />
* Kirschner, P.A., Sweller, J., & Clark, R.E. (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 41(2), 75–86.<br />
* Klahr, D. & Nigam, M. (2004). The Equivalence of Learning Paths in Early Science Instruction: Effects of Direct Instruction and Discovery Learning. Psychological Science, 15(10), 661–667.<br />
* Koedinger, K.R. & Aleven, V. (2007). Exploring the Assistance Dilemma in Experiments with Cognitive Tutors. Educational Psychology Review, 19(3), 239–264.<br />
* Mayer, R.E. (2004). Should There Be a Three-Strikes Rule Against Pure Discovery Learning? The Case for Guided Methods of Instruction. American Psychologist, 59(1), 14–19.<br />
* Steffe, L. & Gale, J. (1995). Constructivism in Education. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.</div>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, conducting classroom study, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, Chemistry domain expertise, support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, Chemistry domain expertise, support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one <br />
instance of the assistance dilemma, an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
Our hypothesis was that students would learn most effectively when assistance giving and withholding are balanced, i.e., in the Tutored Condition.<br />
<br />
===Background and Significance===<br />
<br />
A key goal of educational technology research is to find the right level of support to imbue in computer-based educational systems. The so-called assistance dilemma is central to this goal: “How should learning environments balance assistance giving and withholding to achieve optimal student learning?” (Koedinger & Aleven, 2007). Assistance giving allows students to move forward when they are struggling and truly need help, yet can rob them of the motivation to learn on their own. On the other hand, assistance withholding encourages students to think and learn for themselves, yet can cause frustration when they are unsure of what to do next. <br />
<br />
Although the “assistance dilemma” is a relatively new term, it describes a central issue in the learning sciences that has been debated for some time. The extreme position of assistance giving is usually called direct-instruction or guided learning. <br />
Supporters of this position (e.g. Kirschner, Sweller, & Clark, 2006, Klahr & Nigam, 2004, Mayer, 2004) argue that higher assistance (direct instruction and/or tutoring of basic skills) leads to better learning results because it provides information that students cannot create on their own. Supporters of the opposing position (e.g. Bruner, 1961, Steffe & Gale, 1995) advocate a much lower assistance approach (i.e.,assistance withholding), often called discovery or inquiry learning.<br />
<br />
===Independent Variables===<br />
<br />
The study compared three conditions in which students used different versions of the VLab to solve problems in thermo chemistry: <br />
* (Condition 1) ''The Inquiry-learning Condition'', in which students worked with a version of VLab with no hints and minimal feedback, <br />
* (Condition 2) ''The Tutored Condition'', in which students could request hints and received feedback only when they were severely off track, and <br />
* (Condition 3) ''The Direct-instruction Condition'', in which students were directed to follow a prescribed problem-solving path.<br />
<br />
===Dependent Variables===<br />
<br />
* ''[[Near-transer posttest]]'': Subdivided into Task 1, which was a collection of several multiple-choice questions, and Task 2, in which students had to use the proportionality of temperature change to the concentration for a calculation. The near-transfer portion of the posttest probed the student’s understanding of the direct proportionality between temperature change and solution concentration. <br />
* ''[[Conceptual-understanding posttest]]'': Two items for which responses were given as free-form text. In the first item, students were asked to write a general design strategy for how to create a solution with a desired temperature. The second item restated the goal of the activity (heating food while on a camping trip) and asked students to list the factors of this approach that would limit meeting this goal.<br />
<br />
Because we had access to the students for only a single class period, we were unable to administer a long-term retention posttest.<br />
<br />
===Findings===<br />
<br />
We first scored students’ pretests and ran an ANOVA, with condition as a between-subjects factor, to check that the conditions were equivalent at the outset. Pretest tasks had only one acceptable solution and were graded automatically by a program. Because there was no significant difference between the three conditions, F(2,77)=0.292, p=.748, we assume that students in all three conditions started with a similar level of knowledge. <br />
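<br />
For illustration, the one-way ANOVA run on the pretest scores (condition as a between-subjects factor) can be sketched as follows. This is a minimal sketch; the score lists are hypothetical, not the study’s data.<br />
<br />
```python
def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for a one-way ANOVA over lists of scores."""
    k = len(groups)                          # number of conditions
    n = sum(len(g) for g in groups)          # total number of students
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-groups sum of squares: variation of the condition means.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: variation around each condition mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Hypothetical pretest scores for the three conditions.
inquiry = [4, 5, 6, 5, 4]
tutored = [5, 5, 6, 4, 5]
direct  = [4, 6, 5, 5, 5]
f, dfb, dfw = one_way_anova_f([inquiry, tutored, direct])
print(f"F({dfb},{dfw}) = {f:.3f}")
```
<br />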
<br />
Next, we evaluated the posttest scores. Tasks in the near-transfer part of the posttest also had only one acceptable solution and were scored by a program. Three reviewers graded the free-form conceptual-understanding tasks of the posttest using a common rubric to ensure objectivity. In approximately 90% of cases at least two graders agreed; in the remaining 10%, the average of all three grades was taken. We removed seven outliers from the population: students who scored less than a quarter of the maximum possible points on the posttest. The means of the overall posttest scores, as well as the means of the individual components of the posttest (i.e., the near-transfer and conceptual-understanding scores), are shown below.<br />
<br />
[[Image:BorekEtAlResults.jpg|600px|center]] <br />
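<br />
The three-grader consensus rule described above can be sketched as follows; the grades shown are illustrative, not the study’s data.<br />
<br />
```python
from collections import Counter

def consensus_grade(g1, g2, g3):
    """If at least two of the three graders agree, use the agreed grade;
    otherwise take the average of all three grades."""
    counts = Counter([g1, g2, g3])
    grade, votes = counts.most_common(1)[0]
    if votes >= 2:                    # at least two graders agree
        return grade
    return (g1 + g2 + g3) / 3        # no agreement: average all three

print(consensus_grade(3, 3, 4))      # two agree -> 3
print(consensus_grade(2, 3, 4))      # no agreement -> 3.0
```
<br />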
<br />
We then ran ANCOVAs on the posttest scores, using the pretest scores as the covariate, to evaluate differences in the posttest scores between the conditions. Although the mean scores were higher in the Tutored Condition for both the overall score and the near-transfer score, the differences were not significant, F(2,77)=2.035, p=.138; F(2,77)=0.057, p=.944. However, we did find a significant result on the conceptual-understanding part of the posttest: Students in the Tutored Condition did better on conceptual-understanding tasks than students in the other two conditions, F(2,77)=3.783, p=.007. These results support our hypothesis: Students in the Tutored Condition – the mid-level assistance approach – showed better learning results than students in the other two conditions.<br />
<br />
Finally, we segmented students into strong (top 50%) and weak (bottom 50%) groups based on their pretest scores. In another ANCOVA, again using pretest scores as the covariate, stronger students in the Tutored Condition benefited more in conceptual understanding than stronger students in the other conditions, F(2,37)=4.699, p=.015. Weaker students in the Tutored Condition also did better on the conceptual-understanding part than weaker students in the other conditions, but not significantly so, F(2,37)=1.193, p=.315.<br />
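<br />
The median split used to form the strong and weak groups can be sketched as follows; the student IDs and scores are hypothetical.<br />
<br />
```python
def median_split(scores):
    """scores: {student_id: pretest_score} -> (strong_ids, weak_ids),
    splitting the ranked students at the midpoint."""
    ranked = sorted(scores, key=lambda s: scores[s], reverse=True)
    half = len(ranked) // 2
    return ranked[:half], ranked[half:]

pretest = {"s1": 9, "s2": 4, "s3": 7, "s4": 6}
strong, weak = median_split(pretest)
print(strong, weak)   # -> ['s1', 's3'] ['s4', 's2']
```
<br />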
<br />
===Explanation===<br />
<br />
In summary, we observed differences between the three conditions in conceptual understanding: students in the Tutored Condition scored higher than students in the other conditions, and stronger students in the Tutored Condition outperformed stronger students in the other conditions on the conceptual questions. So why did students in the Tutored Condition achieve greater conceptual understanding? One possible explanation is that the tutored students were able to make more active decisions, leading to higher motivation. At the same time, they received help when they needed it, which may have prevented frustration. Both of these aspects may, in turn, have led to more learning. In contrast, students in the Direct-instruction Condition, unable to make their own decisions, may have been demotivated; that is, they may have received too much assistance for learning. This was hinted at by comments in the feedback questionnaire, e.g., “I disliked having to follow the instructions. It’s like communist chemistry.” Students in the Inquiry-learning Condition, on the other hand, may have become frustrated when they did not know what to do and may not have worked as hard at learning; that is, they may have received too little assistance. This was suggested by feedback such as “It makes me feel really stupid.” Both of these comments are consistent with our classroom observations of students in the two conditions.<br />
<br />
This study is part of the [[Cognitive Factors]] thrust.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
===Annotated Bibliography===<br />
<br />
*Borek, A., McLaren, B.M., Karabinos, M., & Yaron, D. (2009). How Much Assistance is Helpful to Students in Discovery Learning? In U. Cress, V. Dimitrova, & M. Specht (Eds.), Proceedings of the Fourth European Conference on Technology Enhanced Learning, Learning in the Synergy of Multiple Disciplines (EC-TEL 2009), LNCS 5794, September/October 2009, Nice, France. (pp. 391-404). Springer-Verlag Berlin Heidelberg.<br />
<br />
===References===<br />
<br />
*Kirschner, P.A., Sweller, J., & Clark, R.E. (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 41(2), 75–86.<br />
*Klahr, D. & Nigam, M. (2004). The Equivalence of Learning Paths in Early Science Instruction: Effects of Direct Instruction and Discovery Learning. Psychological Science, 15(10), 661–667. <br />
*Koedinger, K.R. & Aleven, V. (2007). Exploring the Assistance Dilemma in Experiments with Cognitive Tutors. Educational Psychology Review, 19, 239–264.<br />
*Mayer, R.E. (2004). Should There Be a Three-Strikes Rule Against Pure Discovery Learning? The Case for Guided Methods of Instruction. American Psychologist, 59(1), 14–19.<br />
*Bruner, J.S. (1961). The Act of Discovery. Harvard Educational Review, 31, 21–32.<br />
*Steffe, L. & Gale, J. (Eds.) (1995). Constructivism in Education. Hillsdale, NJ: Lawrence Erlbaum Associates.</div>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, conducting classroom study, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, Chemistry domain expertise, support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, Chemistry domain expertise, support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one <br />
instance of the assistance dilemma, an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
Our hypothesis was that students would learn most effectively when assistance giving and withholding are balanced, i.e., in the Tutored Condition.<br />
<br />
===Background and Significance===<br />
<br />
A key goal of educational technology research is to find the right level of support to imbue in computer-based educational systems. The so-called assistance dilemma is central to this goal: “How should learning environments balance assistance giving and withholding to achieve optimal student learning?” (Koedinger & Aleven, 2007). Assistance giving allows students to move forward when they are struggling and truly need help, yet can rob them of the motivation to learn on their own. On the other hand, assistance withholding encourages students to think and learn for themselves, yet can cause frustration when they are unsure of what to do next. <br />
<br />
Although the “assistance dilemma” is a relatively new term, it describes a central issue in the learning sciences that has been debated for some time. The extreme position of assistance giving is usually called direct-instruction or guided learning. <br />
Supporters of this position (e.g. Kirschner, Sweller, & Clark, 2006, Klahr & Nigam, 2004, Mayer, 2004) argue that higher assistance (direct instruction and/or tutoring of basic skills) leads to better learning results because it provides information that students cannot create on their own. Supporters of the opposing position (e.g. Bruner, 1961, Steffe & Gale, 1995) advocate a much lower assistance approach (i.e.,assistance withholding), often called discovery or inquiry learning.<br />
<br />
===Independent Variables===<br />
<br />
The study compared three conditions in which students used different versions of the VLab to solve problems in thermo chemistry: <br />
* (Condition 1) ''The Inquiry-learning Condition'', in which students worked with a version of VLab with no hints and minimal feedback, <br />
* (Condition 2) ''The Tutored Condition'', in which students could request hints and received feedback only when they were severely off track, and <br />
* (Condition 3) ''The Direct-instruction Condition'', in which students were directed to follow a prescribed problem-solving path.<br />
<br />
===Dependent Variables===<br />
<br />
* ''[[Near-transer posttest]]'': Subdivided into Task 1, which was a collection of several multiple-choice questions, and Task 2, in which students had to use the proportionality of temperature change to the concentration for a calculation. The near-transfer portion of the posttest probed the student’s understanding of the direct proportionality between temperature change and solution concentration. <br />
* ''[[Conceptual-understanding posttest]]'': Two items for which responses were given as free-form text. In the first item, students were asked to write a general design strategy for how to create a solution with a desired temperature. The second item restated the goal of the activity (heating food while on a camping trip) and asked students to list the factors of this approach that would limit meeting this goal.<br />
<br />
Because we only had access to students for a single class period, we were unable to do a retention posttest.<br />
<br />
===Findings===<br />
<br />
We first scored and ran an ANOVA on students’ pretests, to assure equality between conditions, with conditions as a between-subjects factor. Tasks had only one acceptable solution and were graded by a program. As there was no significant difference in the pretest between the three conditions, F(2,77)=0.292, p=.748, we assume that students in the three conditions started with a similar level of knowledge. <br />
<br />
Next, we evaluated the posttest scores. Tasks in the near-transfer part of the posttest also had only one acceptable solution and were scored by a program. Three reviewers graded the conceptual-understanding tasks of the posttest, answered in free-form text, using the same rubric to ensure objectivity. In approximately 90% of cases there was agreement by at least two graders, in the other 10% the average of all three grades was taken. We removed seven outliers from the population – students who scored less than a quarter of the maximal reachable points in the posttest. The means of the overall posttest scores, as well as the means of the individual components of the posttest (i.e., the near-transfer scores and conceptual-understanding scores), are shown below.<br />
<br />
[[Image:BorekEtAlResults.jpg|600px|center]] <br />
<br />
We then ran ANCOVAs on the posttest scores, using the pretest scores as the covariate, to evaluate differences in the posttest scores between the conditions. Although the mean scores were higher in the Tutored Condition for both the overall score and the near-transfer score, the differences were not significant, F(2,77)=2.035, p=.138; F(2,77)=0.057, p=.944. However, we did find a significant result on the conceptual-understanding part of the posttest: Students in the Tutored Condition did better on conceptual-understanding tasks than students in the other two conditions, F(2,77)=3.783, p=.007. These results support our hypothesis: Students in the Tutored Condition – the mid-level assistance approach – showed better learning results than students in the other two conditions.<br />
<br />
Finally, we segmented students into strong (best 50%) and weak (worst 50%) ups based on their pretest scores. In another ANCOVA, again using pretest scores as the covariate, students in the Tutored Condition who did better on the pretest benefitted more regarding conceptual understanding than students in the other conditions, F(2,37)=4.699, p=.015. Weaker students in the Tutored Condition also did better on the conceptual-understanding part than weaker students in the other conditions, but not significantly, F(2,37)=1.193, p=.315.<br />
<br />
===Explanation===<br />
<br />
In summary, we observed differences between the three conditions in conceptual understanding, where students in the Tutored Condition scored higher than students in the other conditions. In addition, stronger students in the Tutored Condition had better results than stronger students in the other conditions on the conceptual questions. So why did students in the Tutored Condition achieve greater conceptual understanding? One possible explanation is that the tutored students were able to make more active decisions, leading to higher motivation. At the same time, they received help when they needed it, which may have prevented frustration. Both of these aspects may, in turn, have led to more learning. In contrast, students in the Direct-instruction Condition may have been demotivated, unable to make their own decisions; that is, they may have received too much assistance for learning. This was hinted at by some comments in the feedback questionnaire, e.g. “I disliked having to follow the instructions. It‘s like communist chemistry.” Students in the Inquiry-learning Condition, on the other hand, may have gotten frustrated when they did not know what to do and did not work as hard at learning; that is, they may have received too little assistance. This was suggested by some feedback in the questionnaire, e.g., “It makes me feel really stupid.” Both of these comments are consistent with our classroom observation of the students in the two conditions.<br />
<br />
This study is part of the [[Cognitive Factors]] thrust.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
===Annotated Bibliography===<br />
<br />
*Borek, A., McLaren, B.M., Karabinos, M., & Yaron, D. (2009). How Much Assistance is Helpful to Students in Discovery Learning? In U. Cress, V. Dimitrova, & M. Specht (Eds.), Proceedings of the Fourth European Conference on Technology Enhanced Learning, Learning in the Synergy of Multiple Disciplines (EC-TEL 2009), LNCS 5794, September/October 2009, Nice, France. (pp. 391-404). Springer-Verlag Berlin Heidelberg.<br />
<br />
===References===<br />
<br />
*Kirschner, P.A., Sweller, J., & Clark, R.E. (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 75—86.<br />
*Klahr, D. & Nigam, M. (2004). The Equivalence of Learning Paths in Early Science Instruction - Effects of Direct Instruction and Discovery Learning. Psychological Science, 661—667. <br />
*Koedinger, K.R. & Aleven, V. (2007). Exploring the Assistance Dilemma in Experiments with Cognitive Tutors. Educational Psychology Review 19, 239—264.<br />
*Mayer, R.E. (2004). Should There Be a Three-Strikes Rule Against Pure Discovery Learning? - The Case for Guided Methods of Instruction. American Psychologist, 14—19.<br />
* Bruner, J.S. (1961). The Art of Discovery. Harvard Educational Review (31), 21—32.<br />
* Steffe, L. & Gale, J. (1995). Constructivism in Education. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.</div>Bmclarenhttps://learnlab.org/wiki/index.php?title=McLaren_-_The_Assistance_Dilemma_And_Discovery_Learning&diff=10108McLaren - The Assistance Dilemma And Discovery Learning2009-11-20T23:45:55Z<p>Bmclaren: /* Dependent Variables */</p>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, conducting classroom study, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, Chemistry domain expertise, support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, Chemistry domain expertise, support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one <br />
instance of the assistance dilemma, an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
Our hypothesis was that students would learn most effectively when assistance giving and withholding are balanced, i.e., in the Tutored Condition.<br />
<br />
===Background and Significance===<br />
<br />
A key goal of educational technology research is to find the right level of support to imbue in computer-based educational systems. The so-called assistance dilemma is central to this goal: “How should learning environments balance assistance giving and withholding to achieve optimal student learning?” (Koedinger & Aleven, 2007). Assistance giving allows students to move forward when they are struggling and truly need help, yet can rob them of the motivation to learn on their own. On the other hand, assistance withholding encourages students to think and learn for themselves, yet can cause frustration when they are unsure of what to do next. <br />
<br />
Although the “assistance dilemma” is a relatively new term, it describes a central issue in the learning sciences that has been debated for some time. The extreme position of assistance giving is usually called direct-instruction or guided learning. <br />
Supporters of this position (e.g. Kirschner, Sweller, & Clark, 2006, Klahr & Nigam, 2004, Mayer, 2004) argue that higher assistance (direct instruction and/or tutoring of basic skills) leads to better learning results because it provides information that students cannot create on their own. Supporters of the opposing position (e.g. Bruner, 1961, Steffe & Gale, 1995) advocate a much lower assistance approach (i.e.,assistance withholding), often called discovery or inquiry learning.<br />
<br />
===Independent Variables===<br />
<br />
The study compared three conditions in which students used different versions of the VLab to solve problems in thermo chemistry: <br />
* (Condition 1) ''The Inquiry-learning Condition'', in which students worked with a version of VLab with no hints and minimal feedback, <br />
* (Condition 2) ''The Tutored Condition'', in which students could request hints and received feedback only when they were severely off track, and <br />
* (Condition 3) ''The Direct-instruction Condition'', in which students were directed to follow a prescribed problem-solving path.<br />
<br />
===Dependent Variables===<br />
<br />
* ''[[Near-transer posttest]]'': Subdivided into Task 1, which was a collection of several multiple-choice questions, and Task 2, in which students had to use the proportionality of temperature change to the concentration for a calculation. The near-transfer portion of the posttest probed the student’s understanding of the direct proportionality between temperature change and solution concentration. <br />
* ''[[Conceptual-understanding posttest]]'': Two items for which responses were given as free-form text. In the first item, students were asked to write a general design strategy for how to create a solution with a desired temperature. The second item restated the goal of the activity (heating food while on a camping trip) and asked students to list the factors of this approach that would limit meeting this goal.<br />
<br />
Because we only had access to students only for a single class period, we were unable to do a retention posttest.<br />
<br />
===Findings===<br />
<br />
We first scored and ran an ANOVA on students’ pretests, to assure equality between conditions, with conditions as a between-subjects factor. Tasks had only one acceptable solution and were graded by a program. As there was no significant difference in the pretest between the three conditions, F(2,77)=0.292, p=.748, we assume that students in the three conditions started with a similar level of knowledge. <br />
<br />
Next, we evaluated the posttest scores. Tasks in the near-transfer part of the posttest also had only one acceptable solution and were scored by a program. Three reviewers graded the conceptual-understanding tasks of the posttest, answered in free-form text, using the same rubric to ensure objectivity. In approximately 90% of cases there was agreement by at least two graders, in the other 10% the average of all three grades was taken. We removed seven outliers from the population – students who scored less than a quarter of the maximal reachable points in the posttest. The means of the overall posttest scores, as well as the means of the individual components of the posttest (i.e., the near-transfer scores and conceptual-understanding scores), are shown below.<br />
<br />
[[Image:BorekEtAlResults.jpg|600px|center]] <br />
<br />
We then ran ANCOVAs on the posttest scores, using the pretest scores as the covariate, to evaluate differences in the posttest scores between the conditions. Although the mean scores were higher in the Tutored Condition for both the overall score and the near-transfer score, the differences were not significant, F(2,77)=2.035, p=.138; F(2,77)=0.057, p=.944. However, we did find a significant result on the conceptual-understanding part of the posttest: Students in the Tutored Condition did better on conceptual-understanding tasks than students in the other two conditions, F(2,77)=3.783, p=.007. These results support our hypothesis: Students in the Tutored Condition – the mid-level assistance approach – showed better learning results than students in the other two conditions.<br />
<br />
Finally, we segmented students into strong (best 50%) and weak (worst 50%) ups based on their pretest scores. In another ANCOVA, again using pretest scores as the covariate, students in the Tutored Condition who did better on the pretest benefitted more regarding conceptual understanding than students in the other conditions, F(2,37)=4.699, p=.015. Weaker students in the Tutored Condition also did better on the conceptual-understanding part than weaker students in the other conditions, but not significantly, F(2,37)=1.193, p=.315.<br />
<br />
===Explanation===<br />
<br />
In summary, we observed differences between the three conditions in conceptual understanding, where students in the Tutored Condition scored higher than students in the other conditions. In addition, stronger students in the Tutored Condition had better results than stronger students in the other conditions on the conceptual questions. So why did students in the Tutored Condition achieve greater conceptual understanding? One possible explanation is that the tutored students were able to make more active decisions, leading to higher motivation. At the same time, they received help when they needed it, which may have prevented frustration. Both of these aspects may, in turn, have led to more learning. In contrast, students in the Direct-instruction Condition may have been demotivated, unable to make their own decisions; that is, they may have received too much assistance for learning. This was hinted at by some comments in the feedback questionnaire, e.g. “I disliked having to follow the instructions. It‘s like communist chemistry.” Students in the Inquiry-learning Condition, on the other hand, may have gotten frustrated when they did not know what to do and did not work as hard at learning; that is, they may have received too little assistance. This was suggested by some feedback in the questionnaire, e.g., “It makes me feel really stupid.” Both of these comments are consistent with our classroom observation of the students in the two conditions.<br />
<br />
This study is part of the [[Cognitive Factors]] thrust.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
===Annotated Bibliography===<br />
<br />
*Borek, A., McLaren, B.M., Karabinos, M., & Yaron, D. (2009). How Much Assistance is Helpful to Students in Discovery Learning? In U. Cress, V. Dimitrova, & M. Specht (Eds.), Proceedings of the Fourth European Conference on Technology Enhanced Learning, Learning in the Synergy of Multiple Disciplines (EC-TEL 2009), LNCS 5794, September/October 2009, Nice, France. (pp. 391-404). Springer-Verlag Berlin Heidelberg.<br />
<br />
===References===<br />
<br />
*Kirschner, P.A., Sweller, J., & Clark, R.E. (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 75—86.<br />
*Klahr, D. & Nigam, M. (2004). The Equivalence of Learning Paths in Early Science Instruction - Effects of Direct Instruction and Discovery Learning. Psychological Science, 661—667. <br />
*Koedinger, K.R. & Aleven, V. (2007). Exploring the Assistance Dilemma in Experiments with Cognitive Tutors. Educational Psychology Review 19, 239—264.<br />
*Mayer, R.E. (2004). Should There Be a Three-Strikes Rule Against Pure Discovery Learning? - The Case for Guided Methods of Instruction. American Psychologist, 14—19.<br />
* Bruner, J.S. (1961). The Art of Discovery. Harvard Educational Review (31), 21—32.<br />
* Steffe, L. & Gale, J. (1995). Constructivism in Education. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.</div>Bmclarenhttps://learnlab.org/wiki/index.php?title=McLaren_-_The_Assistance_Dilemma_And_Discovery_Learning&diff=10107McLaren - The Assistance Dilemma And Discovery Learning2009-11-20T23:41:57Z<p>Bmclaren: /* Explanation */</p>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, conducting classroom study, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, Chemistry domain expertise, support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, Chemistry domain expertise, support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one <br />
instance of the assistance dilemma, an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
Our hypothesis was that students would learn most effectively when assistance giving and withholding are balanced, i.e., in the Tutored Condition.<br />
<br />
===Background and Significance===<br />
<br />
A key goal of educational technology research is to find the right level of support to build into computer-based educational systems. The so-called assistance dilemma is central to this goal: “How should learning environments balance assistance giving and withholding to achieve optimal student learning?” (Koedinger & Aleven, 2007). Assistance giving allows students to move forward when they are struggling and truly need help, yet can rob them of the motivation to learn on their own. On the other hand, assistance withholding encourages students to think and learn for themselves, yet can cause frustration when they are unsure of what to do next. <br />
<br />
Although the “assistance dilemma” is a relatively new term, it describes a central issue in the learning sciences that has been debated for some time. The extreme position of assistance giving is usually called direct instruction or guided learning. Supporters of this position (e.g., Kirschner, Sweller, & Clark, 2006; Klahr & Nigam, 2004; Mayer, 2004) argue that higher assistance (direct instruction and/or tutoring of basic skills) leads to better learning results because it provides information that students cannot create on their own. Supporters of the opposing position (e.g., Bruner, 1961; Steffe & Gale, 1995) advocate a much lower assistance approach (i.e., assistance withholding), often called discovery or inquiry learning.<br />
<br />
===Independent Variables===<br />
<br />
The study compared three conditions in which students used different versions of the VLab to solve problems in thermochemistry: <br />
* (Condition 1) ''The Inquiry-learning Condition'', in which students worked with a version of VLab with no hints and minimal feedback, <br />
* (Condition 2) ''The Tutored Condition'', in which students could request hints and received feedback only when they were severely off track, and <br />
* (Condition 3) ''The Direct-instruction Condition'', in which students were directed to follow a prescribed problem-solving path.<br />
<br />
===Dependent Variables===<br />
<br />
Our plan is to include the following robust learning dependent variables in our studies.<br />
<br />
* ''[[Normal post-test]]'': Students will take an immediate post-test, right after completing work with the stoichiometry tutor<br />
* ''[[Transfer]]'': Conceptual, transfer questions will be included in the post-tests<br />
* ''[[Long-term retention]]'': Students will take a second post-test, including conceptual, transfer questions, 7 days after the initial post-test<br />
<br />
===Findings===<br />
<br />
We first scored the students’ pretests and ran an ANOVA, with condition as a between-subjects factor, to check that the conditions were equivalent. Pretest tasks had only one acceptable solution and were graded automatically by a program. As there was no significant difference between the three conditions on the pretest, F(2,77)=0.292, p=.748, we assume that students in the three conditions started with a similar level of knowledge. <br />
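<br />
The pretest check above can be sketched in pure Python. This is a minimal sketch with hypothetical scores (the study's actual data are not reproduced on this page), not the authors' analysis code:<br />
<br />
```python
def one_way_anova_f(groups):
    """Return (F, df_between, df_within) for a list of score lists,
    one list per condition (condition is the between-subjects factor)."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-groups sum of squares: group-size-weighted squared
    # deviations of the group means from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-groups sum of squares: squared deviations of each score
    # from its own group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within
```
<br />
With three conditions and 80 analyzed students, the same computation yields an F(2, 77) statistic as reported above; the p-value would then come from the F distribution.<br />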
<br />
Next, we evaluated the posttest scores. Tasks in the near-transfer part of the posttest also had only one acceptable solution and were scored by a program. Three reviewers graded the conceptual-understanding tasks of the posttest, which were answered in free-form text, using the same rubric to ensure objectivity. In approximately 90% of cases at least two graders agreed; in the remaining 10%, the average of all three grades was taken. We removed seven outliers from the population – students who scored less than a quarter of the maximum reachable points on the posttest. The means of the overall posttest scores, as well as the means of the individual components of the posttest (i.e., the near-transfer scores and conceptual-understanding scores), are shown below.<br />
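<br />
The grading-consensus and outlier rules described above can be sketched as follows; the function names and point values are illustrative, not taken from the study materials:<br />
<br />
```python
def consensus_grade(g1, g2, g3):
    """Three-grader rule: if at least two graders agree, take the agreed
    grade (the ~90% case); otherwise average all three (the ~10% case)."""
    if g1 == g2 or g1 == g3:
        return g1
    if g2 == g3:
        return g2
    return (g1 + g2 + g3) / 3

def drop_outliers(posttest_scores, max_points):
    """Keep only students who scored at least a quarter of the maximum
    reachable posttest points; the rest are removed as outliers."""
    return [s for s in posttest_scores if s >= max_points / 4]
```
<br />
For example, graders awarding 2, 2, and 5 points yield a consensus grade of 2, while three different grades are averaged.<br />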
<br />
[[Image:BorekEtAlResults.jpg|600px|center]] <br />
<br />
We then ran ANCOVAs on the posttest scores, using the pretest scores as the covariate, to evaluate differences between the conditions. Although the mean scores were higher in the Tutored Condition for both the overall score and the near-transfer score, these differences were not significant, F(2,77)=2.035, p=.138, and F(2,77)=0.057, p=.944, respectively. However, we did find a significant result on the conceptual-understanding part of the posttest: Students in the Tutored Condition did better on conceptual-understanding tasks than students in the other two conditions, F(2,77)=3.783, p=.007. These results support our hypothesis: Students in the Tutored Condition – the mid-level assistance approach – showed better conceptual learning than students in the other two conditions.<br />
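<br />
The ANCOVA above can be read as a partial F-test comparing a full regression model (pretest covariate plus condition dummies) against a reduced model (pretest alone). The following pure-Python sketch, with hypothetical inputs, illustrates that computation; in practice a statistics package would be used:<br />
<br />
```python
def solve(a, b):
    """Solve a x = b by Gaussian elimination with partial pivoting."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= factor * m[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j]
                              for j in range(i + 1, n))) / m[i][i]
    return x

def residual_ss(rows, y):
    """Residual sum of squares of the OLS fit of y on the design rows."""
    k = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)]
           for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    beta = solve(xtx, xty)
    return sum((yi - sum(b * xi for b, xi in zip(beta, r))) ** 2
               for r, yi in zip(rows, y))

def ancova_f(pretest, posttest, condition, k):
    """Partial F for the condition effect with pretest as covariate,
    for conditions coded 0..k-1."""
    n = len(posttest)
    # Full model: intercept, pretest, and k-1 condition dummies.
    full = [[1.0, p] + [1.0 if c == j else 0.0 for j in range(1, k)]
            for p, c in zip(pretest, condition)]
    # Reduced model: intercept and pretest only.
    reduced = [[1.0, p] for p in pretest]
    sse_full = residual_ss(full, posttest)
    sse_reduced = residual_ss(reduced, posttest)
    df_num, df_den = k - 1, n - (k + 1)
    return ((sse_reduced - sse_full) / df_num) / (sse_full / df_den)
```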
<br />
Finally, we segmented students into strong (best 50%) and weak (worst 50%) groups based on their pretest scores. In another ANCOVA, again using pretest scores as the covariate, the stronger students in the Tutored Condition benefited more in conceptual understanding than the stronger students in the other conditions, F(2,37)=4.699, p=.015. Weaker students in the Tutored Condition also did better on the conceptual-understanding part than weaker students in the other conditions, but not significantly so, F(2,37)=1.193, p=.315.<br />
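<br />
The strong/weak segmentation amounts to a median split on pretest scores, sketched below with a hypothetical helper (not from the study code):<br />
<br />
```python
def median_split(students, pretest):
    """Partition student ids into (strong, weak) halves: students are
    ranked by pretest score, and the top half is the strong group."""
    ranked = sorted(students, key=lambda s: pretest[s], reverse=True)
    half = len(ranked) // 2
    return ranked[:half], ranked[half:]
```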
<br />
===Explanation===<br />
<br />
In summary, we observed differences between the three conditions in conceptual understanding, where students in the Tutored Condition scored higher than students in the other conditions. In addition, stronger students in the Tutored Condition had better results than stronger students in the other conditions on the conceptual questions. So why did students in the Tutored Condition achieve greater conceptual understanding? One possible explanation is that the tutored students were able to make more active decisions, leading to higher motivation. At the same time, they received help when they needed it, which may have prevented frustration. Both of these aspects may, in turn, have led to more learning. In contrast, students in the Direct-instruction Condition may have been demotivated, unable to make their own decisions; that is, they may have received too much assistance for learning. This was hinted at by some comments in the feedback questionnaire, e.g., “I disliked having to follow the instructions. It’s like communist chemistry.” Students in the Inquiry-learning Condition, on the other hand, may have become frustrated when they did not know what to do and did not work as hard at learning; that is, they may have received too little assistance. This was suggested by some feedback in the questionnaire, e.g., “It makes me feel really stupid.” Both of these comments are consistent with our classroom observations of the students in the two conditions.<br />
<br />
This study is part of the [[Cognitive Factors]] thrust.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
===Annotated Bibliography===<br />
<br />
*Borek, A., McLaren, B.M., Karabinos, M., & Yaron, D. (2009). How Much Assistance is Helpful to Students in Discovery Learning? In U. Cress, V. Dimitrova, & M. Specht (Eds.), Proceedings of the Fourth European Conference on Technology Enhanced Learning, Learning in the Synergy of Multiple Disciplines (EC-TEL 2009), LNCS 5794, September/October 2009, Nice, France. (pp. 391-404). Springer-Verlag Berlin Heidelberg.<br />
<br />
===References===<br />
<br />
*Kirschner, P.A., Sweller, J., & Clark, R.E. (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 41(2), 75–86.<br />
*Klahr, D. & Nigam, M. (2004). The Equivalence of Learning Paths in Early Science Instruction: Effects of Direct Instruction and Discovery Learning. Psychological Science, 15(10), 661–667. <br />
*Koedinger, K.R. & Aleven, V. (2007). Exploring the Assistance Dilemma in Experiments with Cognitive Tutors. Educational Psychology Review, 19, 239–264.<br />
*Mayer, R.E. (2004). Should There Be a Three-Strikes Rule Against Pure Discovery Learning? The Case for Guided Methods of Instruction. American Psychologist, 59(1), 14–19.<br />
*Bruner, J.S. (1961). The Art of Discovery. Harvard Educational Review, 31, 21–32.<br />
*Steffe, L. & Gale, J. (1995). Constructivism in Education. Hillsdale, NJ: Lawrence Erlbaum Associates.</div>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, conducting classroom study, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, Chemistry domain expertise, support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, Chemistry domain expertise, support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one <br />
instance of the assistance dilemma, an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
Our hypothesis was that students would learn most effectively when assistance giving and withholding are balanced, i.e., in the Tutored Condition.<br />
<br />
===Background and Significance===<br />
<br />
A key goal of educational technology research is to find the right level of support to imbue in computer-based educational systems. The so-called assistance dilemma is central to this goal: “How should learning environments balance assistance giving and withholding to achieve optimal student learning?” (Koedinger & Aleven, 2007). Assistance giving allows students to move forward when they are struggling and truly need help, yet can rob them of the motivation to learn on their own. On the other hand, assistance withholding encourages students to think and learn for themselves, yet can cause frustration when they are unsure of what to do next. <br />
<br />
Although the “assistance dilemma” is a relatively new term, it describes a central issue in the learning sciences that has been debated for some time. The extreme position of assistance giving is usually called direct-instruction or guided learning. <br />
Supporters of this position (e.g. Kirschner, Sweller, & Clark, 2006, Klahr & Nigam, 2004, Mayer, 2004) argue that higher assistance (direct instruction and/or tutoring of basic skills) leads to better learning results because it provides information that students cannot create on their own. Supporters of the opposing position (e.g. Bruner, 1961, Steffe & Gale, 1995) advocate a much lower assistance approach (i.e.,assistance withholding), often called discovery or inquiry learning.<br />
<br />
===Independent Variables===<br />
<br />
The study compared three conditions in which students used different versions of the VLab to solve problems in thermo chemistry: <br />
* (Condition 1) ''The Inquiry-learning Condition'', in which students worked with a version of VLab with no hints and minimal feedback, <br />
* (Condition 2) ''The Tutored Condition'', in which students could request hints and received feedback only when they were severely off track, and <br />
* (Condition 3) ''The Direct-instruction Condition'', in which students were directed to follow a prescribed problem-solving path.<br />
<br />
===Dependent Variables===<br />
<br />
Our plan is to include the following robust learning dependent variables in our studies.<br />
<br />
* ''[[Normal post-test]]'': Students will take an immediate post-test, right after completing work with the stoichiometry tutor<br />
* ''[[Transfer]]'': Conceptual, transfer questions will be included in the post-tests<br />
* ''[[Long-term retention]]'': Students will take a second post-test, including conceptual, transfer questions, 7 days after the initial post-test<br />
<br />
===Findings===<br />
<br />
We first scored and ran an ANOVA on students’ pretests, to assure equality between conditions, with conditions as a between-subjects factor. Tasks had only one acceptable solution and were graded by a program. As there was no significant difference in the pretest between the three conditions, F(2,77)=0.292, p=.748, we assume that students in the three conditions started with a similar level of knowledge. <br />
<br />
Next, we evaluated the posttest scores. Tasks in the near-transfer part of the posttest also had only one acceptable solution and were scored by a program. Three reviewers graded the conceptual-understanding tasks of the posttest, answered in free-form text, using the same rubric to ensure objectivity. In approximately 90% of cases there was agreement by at least two graders, in the other 10% the average of all three grades was taken. We removed seven outliers from the population – students who scored less than a quarter of the maximal reachable points in the posttest. The means of the overall posttest scores, as well as the means of the individual components of the posttest (i.e., the near-transfer scores and conceptual-understanding scores), are shown below.<br />
<br />
[[Image:BorekEtAlResults.jpg|600px|center]] <br />
<br />
We then ran ANCOVAs on the posttest scores, using the pretest scores as the covariate, to evaluate differences in the posttest scores between the conditions. Although the mean scores were higher in the Tutored Condition for both the overall score and the near-transfer score, the differences were not significant, F(2,77)=2.035, p=.138; F(2,77)=0.057, p=.944. However, we did find a significant result on the conceptual-understanding part of the posttest: Students in the Tutored Condition did better on conceptual-understanding tasks than students in the other two conditions, F(2,77)=3.783, p=.007. These results support our hypothesis: Students in the Tutored Condition – the mid-level assistance approach – showed better learning results than students in the other two conditions.<br />
<br />
Finally, we segmented students into strong (best 50%) and weak (worst 50%) ups based on their pretest scores. In another ANCOVA, again using pretest scores as the covariate, students in the Tutored Condition who did better on the pretest benefitted more regarding conceptual understanding than students in the other conditions, F(2,37)=4.699, p=.015. Weaker students in the Tutored Condition also did better on the conceptual-understanding part than weaker students in the other conditions, but not significantly, F(2,37)=1.193, p=.315.<br />
<br />
===Explanation===<br />
<br />
This study is part of the [[Cognitive Factors]] thrust.<br />
<br />
In summary, we observed differences between the three conditions in conceptual understanding, where students in the Tutored Condition scored higher than students in the other conditions. In addition, stronger students in the Tutored Condition had better results than stronger students in the other conditions on the conceptual questions. So why did students in the Tutored Condition achieve greater conceptual understanding? One possible explanation is that the tutored students were able to make more active decisions, leading to higher motivation. At the same time, they received help when they needed it, which may have prevented frustration. Both of these aspects may, in turn, have led to more learning. In contrast, students in the Direct-instruction Condition may have been demotivated, unable to make their own decisions; that is, they may have received too much assistance for learning. This was hinted at by some comments in the feedback questionnaire, e.g. “I disliked having to follow the instructions. It‘s like communist chemistry.” Students in the Inquiry-learning Condition, on the other hand, may have gotten frustrated when they did not know what to do and did not work as hard at learning; that is, they may have received too little assistance. This was suggested by some feedback in the questionnaire, e.g., “It makes me feel really stupid.” Both of these comments are consistent with our classroom observation of the students in the two conditions.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
===Annotated Bibliography===<br />
<br />
*Borek, A., McLaren, B.M., Karabinos, M., & Yaron, D. (2009). How Much Assistance is Helpful to Students in Discovery Learning? In U. Cress, V. Dimitrova, & M. Specht (Eds.), Proceedings of the Fourth European Conference on Technology Enhanced Learning, Learning in the Synergy of Multiple Disciplines (EC-TEL 2009), LNCS 5794, September/October 2009, Nice, France. (pp. 391-404). Springer-Verlag Berlin Heidelberg.<br />
<br />
===References===<br />
<br />
*Kirschner, P.A., Sweller, J., & Clark, R.E. (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 75—86.<br />
*Klahr, D. & Nigam, M. (2004). The Equivalence of Learning Paths in Early Science Instruction - Effects of Direct Instruction and Discovery Learning. Psychological Science, 661—667. <br />
*Koedinger, K.R. & Aleven, V. (2007). Exploring the Assistance Dilemma in Experiments with Cognitive Tutors. Educational Psychology Review 19, 239—264.<br />
*Mayer, R.E. (2004). Should There Be a Three-Strikes Rule Against Pure Discovery Learning? - The Case for Guided Methods of Instruction. American Psychologist, 14—19.<br />
* Bruner, J.S. (1961). The Art of Discovery. Harvard Educational Review (31), 21—32.<br />
* Steffe, L. & Gale, J. (1995). Constructivism in Education. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.</div>Bmclarenhttps://learnlab.org/wiki/index.php?title=McLaren_-_The_Assistance_Dilemma_And_Discovery_Learning&diff=10105McLaren - The Assistance Dilemma And Discovery Learning2009-11-20T23:40:39Z<p>Bmclaren: /* Hypothesis */</p>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one <br />
instance of the assistance dilemma, an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
Our hypothesis was that students would learn most effectively when assistance giving and withholding are balanced, i.e., in the Tutored Condition.<br />
<br />
===Background and Significance===<br />
<br />
A key goal of educational technology research is to find the right level of support to imbue in computer-based educational systems. The so-called assistance dilemma is central to this goal: “How should learning environments balance assistance giving and withholding to achieve optimal student learning?” (Koedinger & Aleven, 2007). Assistance giving allows students to move forward when they are struggling and truly need help, yet can rob them of the motivation to learn on their own. On the other hand, assistance withholding encourages students to think and learn for themselves, yet can cause frustration when they are unsure of what to do next. <br />
<br />
Although the “assistance dilemma” is a relatively new term, it describes a central issue in the learning sciences that has been debated for some time. The extreme position of assistance giving is usually called direct-instruction or guided learning. <br />
Supporters of this position (e.g. Kirschner, Sweller, & Clark, 2006, Klahr & Nigam, 2004, Mayer, 2004) argue that higher assistance (direct instruction and/or tutoring of basic skills) leads to better learning results because it provides information that students cannot create on their own. Supporters of the opposing position (e.g. Bruner, 1961, Steffe & Gale, 1995) advocate a much lower assistance approach (i.e.,assistance withholding), often called discovery or inquiry learning.<br />
<br />
===Independent Variables===<br />
<br />
The study compared three conditions in which students used different versions of the VLab to solve problems in thermo chemistry: <br />
* (Condition 1) ''The Inquiry-learning Condition'', in which students worked with a version of VLab with no hints and minimal feedback, <br />
* (Condition 2) ''The Tutored Condition'', in which students could request hints and received feedback only when they were severely off track, and <br />
* (Condition 3) ''The Direct-instruction Condition'', in which students were directed to follow a prescribed problem-solving path.<br />
<br />
===Dependent Variables===<br />
<br />
Our plan is to include the following robust learning dependent variables in our studies.<br />
<br />
* ''[[Normal post-test]]'': Students will take an immediate post-test, right after completing work with the stoichiometry tutor<br />
* ''[[Transfer]]'': Conceptual, transfer questions will be included in the post-tests<br />
* ''[[Long-term retention]]'': Students will take a second post-test, including conceptual, transfer questions, 7 days after the initial post-test<br />
<br />
===Findings===<br />
<br />
We first scored and ran an ANOVA on students’ pretests, to assure equality between conditions, with conditions as a between-subjects factor. Tasks had only one acceptable solution and were graded by a program. As there was no significant difference in the pretest between the three conditions, F(2,77)=0.292, p=.748, we assume that students in the three conditions started with a similar level of knowledge. <br />
<br />
Next, we evaluated the posttest scores. Tasks in the near-transfer part of the posttest also had only one acceptable solution and were scored by a program. Three reviewers graded the conceptual-understanding tasks of the posttest, answered in free-form text, using the same rubric to ensure objectivity. In approximately 90% of cases there was agreement by at least two graders, in the other 10% the average of all three grades was taken. We removed seven outliers from the population – students who scored less than a quarter of the maximal reachable points in the posttest. The means of the overall posttest scores, as well as the means of the individual components of the posttest (i.e., the near-transfer scores and conceptual-understanding scores), are shown below.<br />
<br />
[[Image:BorekEtAlResults.jpg|600px|center]] <br />
<br />
We then ran ANCOVAs on the posttest scores, using the pretest scores as the covariate, to evaluate differences in the posttest scores between the conditions. Although the mean scores were higher in the Tutored Condition for both the overall score and the near-transfer score, the differences were not significant, F(2,77)=2.035, p=.138; F(2,77)=0.057, p=.944. However, we did find a significant result on the conceptual-understanding part of the posttest: Students in the Tutored Condition did better on conceptual-understanding tasks than students in the other two conditions, F(2,77)=3.783, p=.007. These results support our hypothesis: Students in the Tutored Condition – the mid-level assistance approach – showed better learning results than students in the other two conditions.<br />
<br />
Finally, we segmented students into strong (best 50%) and weak (worst 50%) ups based on their pretest scores. In another ANCOVA, again using pretest scores as the covariate, students in the Tutored Condition who did better on the pretest benefitted more regarding conceptual understanding than students in the other conditions, F(2,37)=4.699, p=.015. Weaker students in the Tutored Condition also did better on the conceptual-understanding part than weaker students in the other conditions, but not significantly, F(2,37)=1.193, p=.315.<br />
<br />
===Explanation===<br />
<br />
This study is part of the [[Cognitive Factors]] thrust.<br />
<br />
In summary, we observed differences between the three conditions in conceptual understanding, where students in the Tutored Condition scored higher than students in the other conditions. In addition, stronger students in the Tutored Condition had better results than stronger students in the other conditions on the conceptual questions. So why did students in the Tutored Condition achieve greater conceptual understanding? One possible explanation is that the tutored students were able to make more active decisions, leading to higher motivation. At the same time, they received help when they needed it, which may have prevented frustration. Both of these aspects may, in turn, have led to more learning. In contrast, students in the Direct-instruction Condition may have been demotivated, unable to make their own decisions; that is, they may have received too much assistance for learning. This was hinted at by some comments in the feedback questionnaire, e.g. “I disliked having to follow the instructions. It‘s like communist chemistry.” Students in the Inquiry-learning Condition, on the other hand, may have gotten frustrated when they did not know what to do and did not work as hard at learning; that is, they may have received too little assistance. This was suggested by some feedback in the questionnaire, e.g., “It makes me feel really stupid.” Both of these comments are consistent with our classroom observation of the students in the two conditions.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
===Annotated Bibliography===<br />
<br />
*Borek, A., McLaren, B.M., Karabinos, M., & Yaron, D. (2009). How Much Assistance is Helpful to Students in Discovery Learning? In U. Cress, V. Dimitrova, & M. Specht (Eds.), Proceedings of the Fourth European Conference on Technology Enhanced Learning, Learning in the Synergy of Multiple Disciplines (EC-TEL 2009), LNCS 5794, September/October 2009, Nice, France. (pp. 391-404). Springer-Verlag Berlin Heidelberg.<br />
<br />
===References===<br />
<br />
*Kirschner, P.A., Sweller, J., & Clark, R.E. (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 75—86.<br />
*Klahr, D. & Nigam, M. (2004). The Equivalence of Learning Paths in Early Science Instruction - Effects of Direct Instruction and Discovery Learning. Psychological Science, 661—667. <br />
*Koedinger, K.R. & Aleven, V. (2007). Exploring the Assistance Dilemma in Experiments with Cognitive Tutors. Educational Psychology Review 19, 239—264.<br />
*Mayer, R.E. (2004). Should There Be a Three-Strikes Rule Against Pure Discovery Learning? - The Case for Guided Methods of Instruction. American Psychologist, 14—19.<br />
* Bruner, J.S. (1961). The Art of Discovery. Harvard Educational Review (31), 21—32.<br />
* Steffe, L. & Gale, J. (1995). Constructivism in Education. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.</div>Bmclarenhttps://learnlab.org/wiki/index.php?title=McLaren_-_The_Assistance_Dilemma_And_Discovery_Learning&diff=10104McLaren - The Assistance Dilemma And Discovery Learning2009-11-20T23:39:28Z<p>Bmclaren: /* Explanation */</p>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one <br />
instance of the assistance dilemma, an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
===Background and Significance===<br />
<br />
A key goal of educational technology research is to find the right level of support to build into computer-based educational systems. The so-called assistance dilemma is central to this goal: “How should learning environments balance assistance giving and withholding to achieve optimal student learning?” (Koedinger & Aleven, 2007). Assistance giving allows students to move forward when they are struggling and truly need help, yet can rob them of the motivation to learn on their own. On the other hand, assistance withholding encourages students to think and learn for themselves, yet can cause frustration when they are unsure of what to do next. <br />
<br />
Although the “assistance dilemma” is a relatively new term, it describes a central issue in the learning sciences that has been debated for some time. The extreme position of assistance giving is usually called direct instruction or guided learning. Supporters of this position (e.g., Kirschner, Sweller, & Clark, 2006; Klahr & Nigam, 2004; Mayer, 2004) argue that higher assistance (direct instruction and/or tutoring of basic skills) leads to better learning results because it provides information that students cannot create on their own. Supporters of the opposing position (e.g., Bruner, 1961; Steffe & Gale, 1995) advocate a much lower assistance approach (i.e., assistance withholding), often called discovery or inquiry learning.<br />
<br />
===Independent Variables===<br />
<br />
The study compared three conditions in which students used different versions of the VLab to solve problems in thermochemistry: <br />
* (Condition 1) ''The Inquiry-learning Condition'', in which students worked with a version of VLab with no hints and minimal feedback, <br />
* (Condition 2) ''The Tutored Condition'', in which students could request hints and received feedback only when they were severely off track, and <br />
* (Condition 3) ''The Direct-instruction Condition'', in which students were directed to follow a prescribed problem-solving path.<br />
<br />
===Dependent Variables===<br />
<br />
Our plan is to include the following robust learning dependent variables in our studies.<br />
<br />
* ''[[Normal post-test]]'': Students will take an immediate post-test, right after completing their work with the VLab<br />
* ''[[Transfer]]'': Conceptual, transfer questions will be included in the post-tests<br />
* ''[[Long-term retention]]'': Students will take a second post-test, including conceptual, transfer questions, 7 days after the initial post-test<br />
<br />
===Findings===<br />
<br />
We first scored students’ pretests and ran a one-way ANOVA, with condition as a between-subjects factor, to check that the conditions started out equivalent. Tasks had only one acceptable solution and were graded by a program. As there was no significant difference in pretest scores between the three conditions, F(2,77)=0.292, p=.748, we assume that students in the three conditions started with a similar level of knowledge. <br />
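<br />
As a sketch of this analysis, the one-way F statistic can be computed directly as the ratio of the between-condition to the within-condition mean square. The example below is pure Python with invented pretest scores, not the study's data.<br />

```python
# One-way ANOVA F statistic, computed from first principles.
# F = (between-condition mean square) / (within-condition mean square).

def one_way_anova_f(groups):
    """groups: list of score lists, one per condition. Returns F."""
    k = len(groups)                          # number of conditions
    n = sum(len(g) for g in groups)          # total students
    grand = sum(sum(g) for g in groups) / n  # grand mean

    # Between-group sum of squares (condition effect), df = k - 1.
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    # Within-group (error) sum of squares, df = n - k.
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    return (ssb / (k - 1)) / (ssw / (n - k))

# Invented pretest scores for three hypothetical conditions:
pretests = [[12, 14, 13, 15], [13, 15, 14, 16], [12, 13, 15, 14]]
f_value = one_way_anova_f(pretests)
```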
<br />
Next, we evaluated the posttest scores. Tasks in the near-transfer part of the posttest also had only one acceptable solution and were scored by a program. Three reviewers graded the conceptual-understanding tasks of the posttest, which were answered in free-form text, using a common rubric to ensure objectivity. In approximately 90% of cases at least two graders agreed; in the remaining 10%, the average of all three grades was taken. We removed seven outliers from the population: students who scored less than a quarter of the maximum possible points on the posttest. The means of the overall posttest scores, as well as the means of the individual components of the posttest (i.e., the near-transfer scores and conceptual-understanding scores), are shown below.<br />
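<br />
The grading rule described above can be sketched in a few lines. The function names and sample values below are illustrative; only the majority-agreement rule and the 25% outlier cutoff come from the text.<br />

```python
# Sketch of the conceptual-task grading rule: if at least two of the three
# reviewers gave the same grade, use it; otherwise average all three.
# Students below 25% of the posttest maximum are dropped as outliers.

def item_score(g1, g2, g3):
    """Combine three reviewer grades for one free-form answer."""
    grades = [g1, g2, g3]
    for g in grades:
        if grades.count(g) >= 2:   # at least two graders agree
            return g
    return sum(grades) / 3         # no agreement: take the average

def drop_outliers(posttest_totals, max_points):
    """Keep students scoring at least a quarter of the maximum points."""
    return {s: t for s, t in posttest_totals.items()
            if t >= 0.25 * max_points}
```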
<br />
[[Image:BorekEtAlResults.jpg|600px|center]] <br />
<br />
We then ran ANCOVAs on the posttest scores, using the pretest scores as the covariate, to evaluate differences in the posttest scores between the conditions. Although the mean scores were higher in the Tutored Condition for both the overall score and the near-transfer score, the differences were not significant, F(2,77)=2.035, p=.138; F(2,77)=0.057, p=.944. However, we did find a significant result on the conceptual-understanding part of the posttest: Students in the Tutored Condition did better on conceptual-understanding tasks than students in the other two conditions, F(2,77)=3.783, p=.007. These results support our hypothesis: Students in the Tutored Condition – the mid-level assistance approach – showed better learning results than students in the other two conditions.<br />
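<br />
To illustrate what the ANCOVA adjustment does, the sketch below computes posttest means adjusted to the grand pretest mean using the pooled within-group regression slope. The data and function name are invented, and only adjusted means are computed, not the F tests reported above.<br />

```python
# Sketch of an ANCOVA adjustment: each condition's posttest mean is moved
# along the pooled within-group regression line to the grand pretest mean.

def ancova_adjusted_means(groups):
    """groups: dict condition -> list of (pretest, posttest) pairs."""
    xs = [x for pairs in groups.values() for x, _ in pairs]
    grand_x = sum(xs) / len(xs)

    # Pooled within-group slope: b = sum(Sxy) / sum(Sxx) over conditions.
    sxy = sxx = 0.0
    for pairs in groups.values():
        mx = sum(x for x, _ in pairs) / len(pairs)
        my = sum(y for _, y in pairs) / len(pairs)
        sxy += sum((x - mx) * (y - my) for x, y in pairs)
        sxx += sum((x - mx) ** 2 for x, _ in pairs)
    b = sxy / sxx

    # Adjust each condition's posttest mean to the grand pretest mean.
    adjusted = {}
    for name, pairs in groups.items():
        mx = sum(x for x, _ in pairs) / len(pairs)
        my = sum(y for _, y in pairs) / len(pairs)
        adjusted[name] = my - b * (mx - grand_x)
    return adjusted
```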
<br />
Finally, we segmented students into strong (top 50%) and weak (bottom 50%) groups based on their pretest scores. In another ANCOVA, again using pretest scores as the covariate, the stronger students in the Tutored Condition benefited more in conceptual understanding than the stronger students in the other conditions, F(2,37)=4.699, p=.015. Weaker students in the Tutored Condition also did better on the conceptual-understanding part than weaker students in the other conditions, but not significantly so, F(2,37)=1.193, p=.315.<br />
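<br />
The strong/weak segmentation amounts to a median split on pretest scores, which can be sketched as follows (student names and scores are invented):<br />

```python
# Median split: rank students by pretest score; the top half is "strong",
# the bottom half "weak". Ties are broken by rank order in this sketch.

def median_split(pretest_scores):
    """pretest_scores: dict student -> score. Returns (strong, weak)."""
    ranked = sorted(pretest_scores, key=pretest_scores.get, reverse=True)
    half = len(ranked) // 2
    return ranked[:half], ranked[half:]

strong, weak = median_split({"s1": 10, "s2": 25, "s3": 18, "s4": 31})
```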
<br />
===Explanation===<br />
<br />
This study is part of the [[Cognitive Factors]] thrust.<br />
<br />
In summary, we observed differences between the three conditions in conceptual understanding, with students in the Tutored Condition scoring higher than students in the other conditions. In addition, stronger students in the Tutored Condition had better results than stronger students in the other conditions on the conceptual questions. So why did students in the Tutored Condition achieve greater conceptual understanding? One possible explanation is that the tutored students were able to make more active decisions, leading to higher motivation. At the same time, they received help when they needed it, which may have prevented frustration. Both of these aspects may, in turn, have led to more learning. In contrast, students in the Direct-instruction Condition, unable to make their own decisions, may have been demotivated; that is, they may have received too much assistance for learning. This was hinted at by some comments in the feedback questionnaire, e.g., “I disliked having to follow the instructions. It’s like communist chemistry.” Students in the Inquiry-learning Condition, on the other hand, may have become frustrated when they did not know what to do and did not work as hard at learning; that is, they may have received too little assistance. This was suggested by some feedback in the questionnaire, e.g., “It makes me feel really stupid.” Both of these comments are consistent with our classroom observations of the students in the two conditions.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
===Annotated Bibliography===<br />
<br />
*Borek, A., McLaren, B.M., Karabinos, M., & Yaron, D. (2009). How Much Assistance is Helpful to Students in Discovery Learning? In U. Cress, V. Dimitrova, & M. Specht (Eds.), Proceedings of the Fourth European Conference on Technology Enhanced Learning, Learning in the Synergy of Multiple Disciplines (EC-TEL 2009), LNCS 5794, September/October 2009, Nice, France. (pp. 391-404). Springer-Verlag Berlin Heidelberg.<br />
<br />
===References===<br />
<br />
* Bruner, J.S. (1961). The act of discovery. Harvard Educational Review, 31, 21–32.<br />
* Kirschner, P.A., Sweller, J., & Clark, R.E. (2006). Why minimal guidance during instruction does not work: An analysis of the failure of constructivist, discovery, problem-based, experiential, and inquiry-based teaching. Educational Psychologist, 41(2), 75–86.<br />
* Klahr, D. & Nigam, M. (2004). The equivalence of learning paths in early science instruction: Effects of direct instruction and discovery learning. Psychological Science, 15(10), 661–667.<br />
* Koedinger, K.R. & Aleven, V. (2007). Exploring the assistance dilemma in experiments with cognitive tutors. Educational Psychology Review, 19(3), 239–264.<br />
* Mayer, R.E. (2004). Should there be a three-strikes rule against pure discovery learning? The case for guided methods of instruction. American Psychologist, 59(1), 14–19.<br />
* Steffe, L. & Gale, J. (1995). Constructivism in Education. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.</div>Bmclarenhttps://learnlab.org/wiki/index.php?title=File:BorekEtAlResults.jpg&diff=10103File:BorekEtAlResults.jpg2009-11-20T23:36:25Z<p>Bmclaren: </p>
<hr />
<div></div>Bmclarenhttps://learnlab.org/wiki/index.php?title=McLaren_-_The_Assistance_Dilemma_And_Discovery_Learning&diff=10102McLaren - The Assistance Dilemma And Discovery Learning2009-11-20T23:36:01Z<p>Bmclaren: /* Findings */</p>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one <br />
instance of the assistance dilemma, an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
===Background and Significance===<br />
<br />
A key goal of educational technology research is to find the right level of support to imbue in computer-based educational systems. The so-called assistance dilemma is central to this goal: “How should learning environments balance assistance giving and withholding to achieve optimal student learning?” (Koedinger & Aleven, 2007). Assistance giving allows students to move forward when they are struggling and truly need help, yet can rob them of the motivation to learn on their own. On the other hand, assistance withholding encourages students to think and learn for themselves, yet can cause frustration when they are unsure of what to do next. <br />
<br />
Although the “assistance dilemma” is a relatively new term, it describes a central issue in the learning sciences that has been debated for some time. The extreme position of assistance giving is usually called direct-instruction or guided learning. <br />
Supporters of this position (e.g. Kirschner, Sweller, & Clark, 2006, Klahr & Nigam, 2004, Mayer, 2004) argue that higher assistance (direct instruction and/or tutoring of basic skills) leads to better learning results because it provides information that students cannot create on their own. Supporters of the opposing position (e.g. Bruner, 1961, Steffe & Gale, 1995) advocate a much lower assistance approach (i.e.,assistance withholding), often called discovery or inquiry learning.<br />
<br />
===Independent Variables===<br />
<br />
The study compared three conditions in which students used different versions of the VLab to solve problems in thermo chemistry: <br />
* (Condition 1) ''The Inquiry-learning Condition'', in which students worked with a version of VLab with no hints and minimal feedback, <br />
* (Condition 2) ''The Tutored Condition'', in which students could request hints and received feedback only when they were severely off track, and <br />
* (Condition 3) ''The Direct-instruction Condition'', in which students were directed to follow a prescribed problem-solving path.<br />
<br />
===Dependent Variables===<br />
<br />
Our plan is to include the following robust learning dependent variables in our studies.<br />
<br />
* ''[[Normal post-test]]'': Students will take an immediate post-test, right after completing work with the stoichiometry tutor<br />
* ''[[Transfer]]'': Conceptual, transfer questions will be included in the post-tests<br />
* ''[[Long-term retention]]'': Students will take a second post-test, including conceptual, transfer questions, 7 days after the initial post-test<br />
<br />
===Findings===<br />
<br />
We first scored and ran an ANOVA on students’ pretests, to assure equality between conditions, with conditions as a between-subjects factor. Tasks had only one acceptable solution and were graded by a program. As there was no significant difference in the pretest between the three conditions, F(2,77)=0.292, p=.748, we assume that students in the three conditions started with a similar level of knowledge. <br />
<br />
Next, we evaluated the posttest scores. Tasks in the near-transfer part of the posttest also had only one acceptable solution and were scored by a program. Three reviewers graded the conceptual-understanding tasks of the posttest, answered in free-form text, using the same rubric to ensure objectivity. In approximately 90% of cases there was agreement by at least two graders, in the other 10% the average of all three grades was taken. We removed seven outliers from the population – students who scored less than a quarter of the maximal reachable points in the posttest. The means of the overall posttest scores, as well as the means of the individual components of the posttest (i.e., the near-transfer scores and conceptual-understanding scores), are shown below.<br />
<br />
[[Image:BorekEtAlResults.jpg|600px|center]] <br />
<br />
We then ran ANCOVAs on the posttest scores, using the pretest scores as the covariate, to evaluate differences in the posttest scores between the conditions. Although the mean scores were higher in the Tutored Condition for both the overall score and the near-transfer score, the differences were not significant, F(2,77)=2.035, p=.138; F(2,77)=0.057, p=.944. However, we did find a significant result on the conceptual-understanding part of the posttest: Students in the Tutored Condition did better on conceptual-understanding tasks than students in the other two conditions, F(2,77)=3.783, p=.007. These results support our hypothesis: Students in the Tutored Condition – the mid-level assistance approach – showed better learning results than students in the other two conditions.<br />
<br />
Finally, we segmented students into strong (best 50%) and weak (worst 50%) ups based on their pretest scores. In another ANCOVA, again using pretest scores as the covariate, students in the Tutored Condition who did better on the pretest benefitted more regarding conceptual understanding than students in the other conditions, F(2,37)=4.699, p=.015. Weaker students in the Tutored Condition also did better on the conceptual-understanding part than weaker students in the other conditions, but not significantly, F(2,37)=1.193, p=.315.<br />
<br />
===Explanation===<br />
<br />
This study is part of the [[Cognitive Factors]] thrust.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
===Annotated Bibliography===<br />
<br />
*Borek, A., McLaren, B.M., Karabinos, M., & Yaron, D. (2009). How Much Assistance is Helpful to Students in Discovery Learning? In U. Cress, V. Dimitrova, & M. Specht (Eds.), Proceedings of the Fourth European Conference on Technology Enhanced Learning, Learning in the Synergy of Multiple Disciplines (EC-TEL 2009), LNCS 5794, September/October 2009, Nice, France. (pp. 391-404). Springer-Verlag Berlin Heidelberg.<br />
<br />
===References===<br />
<br />
*Kirschner, P.A., Sweller, J., & Clark, R.E. (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 75—86.<br />
*Klahr, D. & Nigam, M. (2004). The Equivalence of Learning Paths in Early Science Instruction - Effects of Direct Instruction and Discovery Learning. Psychological Science, 661—667. <br />
*Koedinger, K.R. & Aleven, V. (2007). Exploring the Assistance Dilemma in Experiments with Cognitive Tutors. Educational Psychology Review 19, 239—264.<br />
*Mayer, R.E. (2004). Should There Be a Three-Strikes Rule Against Pure Discovery Learning? - The Case for Guided Methods of Instruction. American Psychologist, 14—19.<br />
* Bruner, J.S. (1961). The Art of Discovery. Harvard Educational Review (31), 21—32.<br />
* Steffe, L. & Gale, J. (1995). Constructivism in Education. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.</div>Bmclarenhttps://learnlab.org/wiki/index.php?title=McLaren_-_The_Assistance_Dilemma_And_Discovery_Learning&diff=10101McLaren - The Assistance Dilemma And Discovery Learning2009-11-20T23:32:26Z<p>Bmclaren: /* Findings */</p>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one <br />
instance of the assistance dilemma, an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
===Background and Significance===<br />
<br />
A key goal of educational technology research is to find the right level of support to imbue in computer-based educational systems. The so-called assistance dilemma is central to this goal: “How should learning environments balance assistance giving and withholding to achieve optimal student learning?” (Koedinger & Aleven, 2007). Assistance giving allows students to move forward when they are struggling and truly need help, yet can rob them of the motivation to learn on their own. On the other hand, assistance withholding encourages students to think and learn for themselves, yet can cause frustration when they are unsure of what to do next. <br />
<br />
Although the “assistance dilemma” is a relatively new term, it describes a central issue in the learning sciences that has been debated for some time. The extreme position of assistance giving is usually called direct-instruction or guided learning. <br />
Supporters of this position (e.g. Kirschner, Sweller, & Clark, 2006, Klahr & Nigam, 2004, Mayer, 2004) argue that higher assistance (direct instruction and/or tutoring of basic skills) leads to better learning results because it provides information that students cannot create on their own. Supporters of the opposing position (e.g. Bruner, 1961, Steffe & Gale, 1995) advocate a much lower assistance approach (i.e.,assistance withholding), often called discovery or inquiry learning.<br />
<br />
===Independent Variables===<br />
<br />
The study compared three conditions in which students used different versions of the VLab to solve problems in thermo chemistry: <br />
* (Condition 1) ''The Inquiry-learning Condition'', in which students worked with a version of VLab with no hints and minimal feedback, <br />
* (Condition 2) ''The Tutored Condition'', in which students could request hints and received feedback only when they were severely off track, and <br />
* (Condition 3) ''The Direct-instruction Condition'', in which students were directed to follow a prescribed problem-solving path.<br />
<br />
===Dependent Variables===<br />
<br />
Our plan is to include the following robust learning dependent variables in our studies.<br />
<br />
* ''[[Normal post-test]]'': Students will take an immediate post-test, right after completing work with the stoichiometry tutor<br />
* ''[[Transfer]]'': Conceptual, transfer questions will be included in the post-tests<br />
* ''[[Long-term retention]]'': Students will take a second post-test, including conceptual, transfer questions, 7 days after the initial post-test<br />
<br />
===Findings===<br />
<br />
We first scored and ran an ANOVA on students’ pretests, to assure equality between conditions, with conditions as a between-subjects factor. Tasks had only one acceptable solution and were graded by a program. As there was no significant difference in the pretest between the three conditions, F(2,77)=0.292, p=.748, we assume that students in the three conditions started with a similar level of knowledge. <br />
<br />
Next, we evaluated the posttest scores. Tasks in the near-transfer part of the posttest also had only one acceptable solution and were scored by a program. Three reviewers graded the conceptual-understanding tasks of the posttest, answered in free-form text, using the same rubric to ensure objectivity. In approximately 90% of cases there was agreement by at least two graders, in the other 10% the average of all three grades was taken. We removed seven outliers from the population – students who scored less than a quarter of the maximal reachable points in the posttest. The means of the overall posttest scores, as well as the means of the individual components of the posttest (i.e., the near-transfer scores and conceptual-understanding scores), are shown below. <br />
<br />
We then ran ANCOVAs on the posttest scores, using the pretest scores as the covariate, to evaluate differences in the posttest scores between the conditions. Although the mean scores were higher in the Tutored Condition for both the overall score and the near-transfer score, the differences were not significant, F(2,77)=2.035, p=.138; F(2,77)=0.057, p=.944. However, we did find a significant result on the conceptual-understanding part of the posttest: Students in the Tutored Condition did better on conceptual-understanding tasks than students in the other two conditions, F(2,77)=3.783, p=.007. These results support our hypothesis: Students in the Tutored Condition – the mid-level assistance approach – showed better learning results than students in the other two conditions.<br />
<br />
Finally, we segmented students into strong (best 50%) and weak (worst 50%) ups based on their pretest scores. In another ANCOVA, again using pretest scores as the covariate, students in the Tutored Condition who did better on the pretest benefitted more regarding conceptual understanding than students in the other conditions, F(2,37)=4.699, p=.015. Weaker students in the Tutored Condition also did better on the conceptual-understanding part than weaker students in the other conditions, but not significantly, F(2,37)=1.193, p=.315.<br />
<br />
===Explanation===<br />
<br />
This study is part of the [[Cognitive Factors]] thrust.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
===Annotated Bibliography===<br />
<br />
*Borek, A., McLaren, B.M., Karabinos, M., & Yaron, D. (2009). How Much Assistance is Helpful to Students in Discovery Learning? In U. Cress, V. Dimitrova, & M. Specht (Eds.), Proceedings of the Fourth European Conference on Technology Enhanced Learning, Learning in the Synergy of Multiple Disciplines (EC-TEL 2009), LNCS 5794, September/October 2009, Nice, France. (pp. 391-404). Springer-Verlag Berlin Heidelberg.<br />
<br />
===References===<br />
<br />
*Kirschner, P.A., Sweller, J., & Clark, R.E. (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 75—86.<br />
*Klahr, D. & Nigam, M. (2004). The Equivalence of Learning Paths in Early Science Instruction - Effects of Direct Instruction and Discovery Learning. Psychological Science, 661—667. <br />
*Koedinger, K.R. & Aleven, V. (2007). Exploring the Assistance Dilemma in Experiments with Cognitive Tutors. Educational Psychology Review 19, 239—264.<br />
*Mayer, R.E. (2004). Should There Be a Three-Strikes Rule Against Pure Discovery Learning? - The Case for Guided Methods of Instruction. American Psychologist, 14—19.<br />
* Bruner, J.S. (1961). The Art of Discovery. Harvard Educational Review (31), 21—32.<br />
* Steffe, L. & Gale, J. (1995). Constructivism in Education. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.</div>Bmclarenhttps://learnlab.org/wiki/index.php?title=McLaren_-_The_Assistance_Dilemma_And_Discovery_Learning&diff=10100McLaren - The Assistance Dilemma And Discovery Learning2009-11-20T23:31:36Z<p>Bmclaren: /* Findings */</p>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one <br />
instance of the assistance dilemma, an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
===Background and Significance===<br />
<br />
A key goal of educational technology research is to find the right level of support to imbue in computer-based educational systems. The so-called assistance dilemma is central to this goal: “How should learning environments balance assistance giving and withholding to achieve optimal student learning?” (Koedinger & Aleven, 2007). Assistance giving allows students to move forward when they are struggling and truly need help, yet can rob them of the motivation to learn on their own. On the other hand, assistance withholding encourages students to think and learn for themselves, yet can cause frustration when they are unsure of what to do next. <br />
<br />
Although the “assistance dilemma” is a relatively new term, it describes a central issue in the learning sciences that has been debated for some time. The extreme position of assistance giving is usually called direct-instruction or guided learning. <br />
Supporters of this position (e.g. Kirschner, Sweller, & Clark, 2006, Klahr & Nigam, 2004, Mayer, 2004) argue that higher assistance (direct instruction and/or tutoring of basic skills) leads to better learning results because it provides information that students cannot create on their own. Supporters of the opposing position (e.g. Bruner, 1961, Steffe & Gale, 1995) advocate a much lower assistance approach (i.e.,assistance withholding), often called discovery or inquiry learning.<br />
<br />
===Independent Variables===<br />
<br />
The study compared three conditions in which students used different versions of the VLab to solve problems in thermo chemistry: <br />
* (Condition 1) ''The Inquiry-learning Condition'', in which students worked with a version of VLab with no hints and minimal feedback, <br />
* (Condition 2) ''The Tutored Condition'', in which students could request hints and received feedback only when they were severely off track, and <br />
* (Condition 3) ''The Direct-instruction Condition'', in which students were directed to follow a prescribed problem-solving path.<br />
<br />
===Dependent Variables===<br />
<br />
Our plan is to include the following robust learning dependent variables in our studies.<br />
<br />
* ''[[Normal post-test]]'': Students will take an immediate post-test, right after completing their work in the VLab<br />
* ''[[Transfer]]'': Conceptual, transfer questions will be included in the post-tests<br />
* ''[[Long-term retention]]'': Students will take a second post-test, including conceptual, transfer questions, 7 days after the initial post-test<br />
<br />
===Findings===<br />
<br />
We first scored the pretests and ran a one-way ANOVA on the scores, with condition as a between-subjects factor, to check for equivalence between conditions. Pretest tasks had only one acceptable solution and were graded by a program. As there was no significant difference between the three conditions on the pretest, F(2,77)=0.292, p=.748, we assume that students in the three conditions started with a similar level of knowledge. <br />
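<br />
The equivalence check just described can be sketched in a few lines of pure Python. This is an illustrative implementation of the one-way ANOVA F statistic (with condition as a between-subjects factor), not the actual analysis script used in the study:<br />

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of groups (each a list of scores)."""
    scores = [x for g in groups for x in g]
    n, k = len(scores), len(groups)
    grand_mean = sum(scores) / n
    # Between-group sum of squares: size-weighted squared deviations of group means
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: squared deviations around each group's own mean
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

Comparing the resulting statistic against the F(2, 77) reference distribution yields the p value reported above; a small F is what licenses the equivalence assumption.<br />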
<br />
Next, we evaluated the posttest scores. Tasks in the near-transfer part of the posttest also had only one acceptable solution and were scored by a program. Three reviewers graded the conceptual-understanding tasks of the posttest, which were answered in free-form text, using a common rubric to ensure objectivity. In approximately 90% of cases at least two graders agreed; in the remaining 10%, the average of all three grades was taken. We removed seven outliers from the population – students who scored less than a quarter of the maximum attainable points on the posttest. The means of the overall posttest scores, as well as the means of the individual components of the posttest (i.e., the near-transfer scores and the conceptual-understanding scores), are shown below. <br />
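<br />
The grade-consolidation and outlier rules just described are mechanical enough to state as code. A minimal sketch in Python; the function names are ours, not from the study:<br />

```python
def consolidate_grade(g1, g2, g3):
    """Combine three graders' scores for one free-form answer: if at least
    two graders agree, use the agreed grade; otherwise (all three differ)
    fall back to the average of the three."""
    if g1 == g2 or g1 == g3:
        return g1
    if g2 == g3:
        return g2
    return (g1 + g2 + g3) / 3

def drop_outliers(scores, max_points):
    """Drop students who scored less than a quarter of the maximum
    attainable posttest points (the study's outlier rule)."""
    return [s for s in scores if s >= max_points / 4]
```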
<br />
We then ran ANCOVAs on the posttest scores, using the pretest scores as the covariate, to evaluate differences in the posttest scores between the conditions. Although the mean scores were higher in the Tutored Condition for both the overall score and the near-transfer score, the differences were not significant, F(2,77)=2.035, p=.138; F(2,77)=0.057, p=.944. However, we did find a significant result on the conceptual-understanding part of the posttest: Students in the Tutored Condition did better on conceptual-understanding tasks than students in the other two conditions, F(2,77)=3.783, p=.007. These results support our hypothesis: Students in the Tutored Condition – the mid-level assistance approach – showed better learning results than students in the other two conditions.<br />
<br />
===Explanation===<br />
<br />
This study is part of the [[Cognitive Factors]] thrust.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
===Annotated Bibliography===<br />
<br />
*Borek, A., McLaren, B.M., Karabinos, M., & Yaron, D. (2009). How Much Assistance is Helpful to Students in Discovery Learning? In U. Cress, V. Dimitrova, & M. Specht (Eds.), Proceedings of the Fourth European Conference on Technology Enhanced Learning, Learning in the Synergy of Multiple Disciplines (EC-TEL 2009), LNCS 5794, September/October 2009, Nice, France. (pp. 391-404). Springer-Verlag Berlin Heidelberg.<br />
<br />
===References===<br />
<br />
*Kirschner, P.A., Sweller, J., & Clark, R.E. (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 41(2), 75–86.<br />
*Klahr, D. & Nigam, M. (2004). The Equivalence of Learning Paths in Early Science Instruction: Effects of Direct Instruction and Discovery Learning. Psychological Science, 15(10), 661–667.<br />
*Koedinger, K.R. & Aleven, V. (2007). Exploring the Assistance Dilemma in Experiments with Cognitive Tutors. Educational Psychology Review, 19, 239–264.<br />
*Mayer, R.E. (2004). Should There Be a Three-Strikes Rule Against Pure Discovery Learning? The Case for Guided Methods of Instruction. American Psychologist, 59(1), 14–19.<br />
*Bruner, J.S. (1961). The Art of Discovery. Harvard Educational Review, 31, 21–32.<br />
* Steffe, L. & Gale, J. (1995). Constructivism in Education. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.</div>Bmclarenhttps://learnlab.org/wiki/index.php?title=McLaren_-_The_Assistance_Dilemma_And_Discovery_Learning&diff=10099McLaren - The Assistance Dilemma And Discovery Learning2009-11-20T23:30:36Z<p>Bmclaren: /* Findings */</p>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one <br />
instance of the assistance dilemma, an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
===Background and Significance===<br />
<br />
A key goal of educational technology research is to find the right level of support to imbue in computer-based educational systems. The so-called assistance dilemma is central to this goal: “How should learning environments balance assistance giving and withholding to achieve optimal student learning?” (Koedinger & Aleven, 2007). Assistance giving allows students to move forward when they are struggling and truly need help, yet can rob them of the motivation to learn on their own. On the other hand, assistance withholding encourages students to think and learn for themselves, yet can cause frustration when they are unsure of what to do next. <br />
<br />
Although the “assistance dilemma” is a relatively new term, it describes a central issue in the learning sciences that has been debated for some time. The extreme position of assistance giving is usually called direct-instruction or guided learning. <br />
Supporters of this position (e.g. Kirschner, Sweller, & Clark, 2006, Klahr & Nigam, 2004, Mayer, 2004) argue that higher assistance (direct instruction and/or tutoring of basic skills) leads to better learning results because it provides information that students cannot create on their own. Supporters of the opposing position (e.g. Bruner, 1961, Steffe & Gale, 1995) advocate a much lower assistance approach (i.e.,assistance withholding), often called discovery or inquiry learning.<br />
<br />
===Independent Variables===<br />
<br />
The study compared three conditions in which students used different versions of the VLab to solve problems in thermo chemistry: <br />
* (Condition 1) ''The Inquiry-learning Condition'', in which students worked with a version of VLab with no hints and minimal feedback, <br />
* (Condition 2) ''The Tutored Condition'', in which students could request hints and received feedback only when they were severely off track, and <br />
* (Condition 3) ''The Direct-instruction Condition'', in which students were directed to follow a prescribed problem-solving path.<br />
<br />
===Dependent Variables===<br />
<br />
Our plan is to include the following robust learning dependent variables in our studies.<br />
<br />
* ''[[Normal post-test]]'': Students will take an immediate post-test, right after completing work with the stoichiometry tutor<br />
* ''[[Transfer]]'': Conceptual, transfer questions will be included in the post-tests<br />
* ''[[Long-term retention]]'': Students will take a second post-test, including conceptual, transfer questions, 7 days after the initial post-test<br />
<br />
===Findings===<br />
<br />
We first scored and ran an ANOVA on students’ pretests, to assure equality between conditions, with conditions as a between-subjects factor. Tasks had only one acceptable solution and were graded by a program. As there was no significant difference in the pretest between the three conditions, F(2,77)=0.292, p=.748, we assume that students in the three conditions started with a similar level of knowledge. <br />
<br />
Next, we evaluated the posttest scores. Tasks in the near-transfer part of the posttest also had only one acceptable solution and were scored by a program. Three reviewers graded the conceptual-understanding tasks of the posttest, answered in free-form text, using the same rubric to ensure objectivity. In approximately 90% of cases there was agreement by at least two graders, in the other 10% the average of all three grades was taken. We removed seven outliers from the population – students who scored less than a quarter of the maximal reachable points in the posttest. The means of the overall posttest scores, as well as the means of the individual components of the posttest (i.e., the near-transfer scores and conceptual-understanding scores), are shown below. <br />
<br />
We then ran ANCOVAs on the posttest scores, using the pretest scores as the covariate, to evaluate differences in the posttest scores between the conditions. Although the mean scores were higher in the Tutored Condition for both the overall score and the near-transfer score, the differences were not significant, F(2,77)=2.035, p=.138; F(2,77)=0.057, p=.944. However, we did find a significant result on the conceptual-understanding part of the posttest: Students in the Tutored<br />
<br />
===Explanation===<br />
<br />
This study is part of the [[Cognitive Factors]] thrust.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
===Annotated Bibliography===<br />
<br />
*Borek, A., McLaren, B.M., Karabinos, M., & Yaron, D. (2009). How Much Assistance is Helpful to Students in Discovery Learning? In U. Cress, V. Dimitrova, & M. Specht (Eds.), Proceedings of the Fourth European Conference on Technology Enhanced Learning, Learning in the Synergy of Multiple Disciplines (EC-TEL 2009), LNCS 5794, September/October 2009, Nice, France. (pp. 391-404). Springer-Verlag Berlin Heidelberg.<br />
<br />
===References===<br />
<br />
*Kirschner, P.A., Sweller, J., & Clark, R.E. (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 75—86.<br />
*Klahr, D. & Nigam, M. (2004). The Equivalence of Learning Paths in Early Science Instruction - Effects of Direct Instruction and Discovery Learning. Psychological Science, 661—667. <br />
*Koedinger, K.R. & Aleven, V. (2007). Exploring the Assistance Dilemma in Experiments with Cognitive Tutors. Educational Psychology Review 19, 239—264.<br />
*Mayer, R.E. (2004). Should There Be a Three-Strikes Rule Against Pure Discovery Learning? - The Case for Guided Methods of Instruction. American Psychologist, 14—19.<br />
* Bruner, J.S. (1961). The Art of Discovery. Harvard Educational Review (31), 21—32.<br />
* Steffe, L. & Gale, J. (1995). Constructivism in Education. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.</div>Bmclarenhttps://learnlab.org/wiki/index.php?title=McLaren_-_The_Assistance_Dilemma_And_Discovery_Learning&diff=10097McLaren - The Assistance Dilemma And Discovery Learning2009-11-20T23:26:59Z<p>Bmclaren: /* Background and Significance */</p>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one <br />
instance of the assistance dilemma, an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
===Background and Significance===<br />
<br />
A key goal of educational technology research is to find the right level of support to imbue in computer-based educational systems. The so-called assistance dilemma is central to this goal: “How should learning environments balance assistance giving and withholding to achieve optimal student learning?” (Koedinger & Aleven, 2007). Assistance giving allows students to move forward when they are struggling and truly need help, yet can rob them of the motivation to learn on their own. On the other hand, assistance withholding encourages students to think and learn for themselves, yet can cause frustration when they are unsure of what to do next. <br />
<br />
Although the “assistance dilemma” is a relatively new term, it describes a central issue in the learning sciences that has been debated for some time. The extreme position of assistance giving is usually called direct-instruction or guided learning. <br />
Supporters of this position (e.g. Kirschner, Sweller, & Clark, 2006, Klahr & Nigam, 2004, Mayer, 2004) argue that higher assistance (direct instruction and/or tutoring of basic skills) leads to better learning results because it provides information that students cannot create on their own. Supporters of the opposing position (e.g. Bruner, 1961, Steffe & Gale, 1995) advocate a much lower assistance approach (i.e.,assistance withholding), often called discovery or inquiry learning.<br />
<br />
===Independent Variables===<br />
<br />
The study compared three conditions in which students used different versions of the VLab to solve problems in thermo chemistry: <br />
* (Condition 1) ''The Inquiry-learning Condition'', in which students worked with a version of VLab with no hints and minimal feedback, <br />
* (Condition 2) ''The Tutored Condition'', in which students could request hints and received feedback only when they were severely off track, and <br />
* (Condition 3) ''The Direct-instruction Condition'', in which students were directed to follow a prescribed problem-solving path.<br />
<br />
===Dependent Variables===<br />
<br />
Our plan is to include the following robust learning dependent variables in our studies.<br />
<br />
* ''[[Normal post-test]]'': Students will take an immediate post-test, right after completing work with the stoichiometry tutor<br />
* ''[[Transfer]]'': Conceptual, transfer questions will be included in the post-tests<br />
* ''[[Long-term retention]]'': Students will take a second post-test, including conceptual, transfer questions, 7 days after the initial post-test<br />
<br />
===Findings===<br />
<br />
As mentioned above, a lab study with over 100 subjects was run in early 2009 at the University of California with the above conditions. College students learned to solve chemistry stoichiometry problems with the stoichiometry tutor through hints and feedback, either polite or direct, as described above. There was a pattern in which students with low prior knowledge of chemistry performed better on subsequent problem-solving tests if they learned from the polite tutor rather than the direct tutor (d = .73 on an immediate test, d = .46 on a delayed test), whereas students with high prior knowledge showed the reverse trend (d = -.49 for an immediate test; d = -.13 for a delayed test). On the other hand, the high school study, also run in early 2009 with over 100 subjects, produced different results. In particular, the high school students did not show a pattern in which students with low prior knowledge of chemistry performed better on subsequent tests. We are still analyzing the audio feature of the study, i.e., the comparison of audio to text hints and messages, but preliminary results indicate that adding audio hurt the performance of high knowledge learners and helped low knowledge learners on the delayed test.<br />
<br />
===Explanation===<br />
<br />
This study is part of the [[Cognitive Factors]] thrust.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
===Annotated Bibliography===<br />
<br />
*Borek, A., McLaren, B.M., Karabinos, M., & Yaron, D. (2009). How Much Assistance is Helpful to Students in Discovery Learning? In U. Cress, V. Dimitrova, & M. Specht (Eds.), Proceedings of the Fourth European Conference on Technology Enhanced Learning, Learning in the Synergy of Multiple Disciplines (EC-TEL 2009), LNCS 5794, September/October 2009, Nice, France. (pp. 391-404). Springer-Verlag Berlin Heidelberg.<br />
<br />
===References===<br />
<br />
*Kirschner, P.A., Sweller, J., & Clark, R.E. (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 75—86.<br />
*Klahr, D. & Nigam, M. (2004). The Equivalence of Learning Paths in Early Science Instruction - Effects of Direct Instruction and Discovery Learning. Psychological Science, 661—667. <br />
*Koedinger, K.R. & Aleven, V. (2007). Exploring the Assistance Dilemma in Experiments with Cognitive Tutors. Educational Psychology Review 19, 239—264.<br />
*Mayer, R.E. (2004). Should There Be a Three-Strikes Rule Against Pure Discovery Learning? - The Case for Guided Methods of Instruction. American Psychologist, 14—19.<br />
* Bruner, J.S. (1961). The Art of Discovery. Harvard Educational Review (31), 21—32.<br />
* Steffe, L. & Gale, J. (1995). Constructivism in Education. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.</div>Bmclarenhttps://learnlab.org/wiki/index.php?title=McLaren_-_The_Assistance_Dilemma_And_Discovery_Learning&diff=10096McLaren - The Assistance Dilemma And Discovery Learning2009-11-20T23:26:33Z<p>Bmclaren: /* Background and Significance */</p>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one <br />
instance of the assistance dilemma, an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
===Background and Significance===<br />
<br />
A key goal of educational technology research is to find the right level of support to imbue in computer-based educational systems. The so-called assistance dilemma is central to this goal: “How should learning environments balance assistance giving and withholding to achieve optimal student learning?” (Koedinger & Aleven, 2007). Assistance giving allows students to move forward when they are struggling and truly need help, yet can rob them of the motivation to learn on their own. On the other hand, assistance withholding encourages students to think and learn for themselves, yet can cause frustration when they are unsure of what to do next. <br />
<br />
Although the “assistance dilemma” is a relatively new term, it describes a central issue in the learning sciences that has been debated for some time. The extreme position of assistance giving is usually called direct-instruction or guided learning. <br />
Supporters of this position (e.g. Kirschner, Sweller, & Clark, 2006, Klahr & Nigam, 2004, ) argue that higher assistance (direct instruction and/or tutoring of basic skills) leads to better learning results because it provides information that students cannot create on their own. Supporters of the opposing position (e.g. Bruner, 1961, Steffe & Gale, 1995) advocate a much lower assistance approach (i.e.,assistance withholding), often called discovery or inquiry learning.<br />
<br />
===Independent Variables===<br />
<br />
The study compared three conditions in which students used different versions of the VLab to solve problems in thermo chemistry: <br />
* (Condition 1) ''The Inquiry-learning Condition'', in which students worked with a version of VLab with no hints and minimal feedback, <br />
* (Condition 2) ''The Tutored Condition'', in which students could request hints and received feedback only when they were severely off track, and <br />
* (Condition 3) ''The Direct-instruction Condition'', in which students were directed to follow a prescribed problem-solving path.<br />
<br />
===Dependent Variables===<br />
<br />
Our plan is to include the following robust learning dependent variables in our studies.<br />
<br />
* ''[[Normal post-test]]'': Students will take an immediate post-test, right after completing work with the stoichiometry tutor<br />
* ''[[Transfer]]'': Conceptual, transfer questions will be included in the post-tests<br />
* ''[[Long-term retention]]'': Students will take a second post-test, including conceptual, transfer questions, 7 days after the initial post-test<br />
<br />
===Findings===<br />
<br />
As mentioned above, a lab study with over 100 subjects was run in early 2009 at the University of California with the above conditions. College students learned to solve chemistry stoichiometry problems with the stoichiometry tutor through hints and feedback, either polite or direct, as described above. There was a pattern in which students with low prior knowledge of chemistry performed better on subsequent problem-solving tests if they learned from the polite tutor rather than the direct tutor (d = .73 on an immediate test, d = .46 on a delayed test), whereas students with high prior knowledge showed the reverse trend (d = -.49 for an immediate test; d = -.13 for a delayed test). On the other hand, the high school study, also run in early 2009 with over 100 subjects, produced different results. In particular, the high school students did not show a pattern in which students with low prior knowledge of chemistry performed better on subsequent tests. We are still analyzing the audio feature of the study, i.e., the comparison of audio to text hints and messages, but preliminary results indicate that adding audio hurt the performance of high knowledge learners and helped low knowledge learners on the delayed test.<br />
<br />
===Explanation===<br />
<br />
This study is part of the [[Cognitive Factors]] thrust.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
===Annotated Bibliography===<br />
<br />
*Borek, A., McLaren, B.M., Karabinos, M., & Yaron, D. (2009). How Much Assistance is Helpful to Students in Discovery Learning? In U. Cress, V. Dimitrova, & M. Specht (Eds.), Proceedings of the Fourth European Conference on Technology Enhanced Learning, Learning in the Synergy of Multiple Disciplines (EC-TEL 2009), LNCS 5794, September/October 2009, Nice, France. (pp. 391-404). Springer-Verlag Berlin Heidelberg.<br />
<br />
===References===<br />
<br />
*Kirschner, P.A., Sweller, J., & Clark, R.E. (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 75—86.<br />
*Klahr, D. & Nigam, M. (2004). The Equivalence of Learning Paths in Early Science Instruction - Effects of Direct Instruction and Discovery Learning. Psychological Science, 661—667. <br />
*Koedinger, K.R. & Aleven, V. (2007). Exploring the Assistance Dilemma in Experiments with Cognitive Tutors. Educational Psychology Review 19, 239—264.<br />
*Mayer, R.E. (2004). Should There Be a Three-Strikes Rule Against Pure Discovery Learning? - The Case for Guided Methods of Instruction. American Psychologist, 14—19.<br />
* Bruner, J.S. (1961). The Art of Discovery. Harvard Educational Review (31), 21—32.<br />
* Steffe, L. & Gale, J. (1995). Constructivism in Education. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.</div>Bmclarenhttps://learnlab.org/wiki/index.php?title=McLaren_-_The_Assistance_Dilemma_And_Discovery_Learning&diff=10095McLaren - The Assistance Dilemma And Discovery Learning2009-11-20T23:25:18Z<p>Bmclaren: /* Background and Significance */</p>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one <br />
instance of the assistance dilemma, an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
===Background and Significance===<br />
<br />
A key goal of educational technology research is to find the right level of support to build into computer-based educational systems. The so-called assistance dilemma is central to this goal: “How should learning environments balance assistance giving and withholding to achieve optimal student learning?” (Koedinger & Aleven, 2007). Assistance giving allows students to move forward when they are struggling and truly need help, yet can rob them of the motivation to learn on their own. On the other hand, assistance withholding encourages students to think and learn for themselves, yet can cause frustration when they are unsure of what to do next.<br />
<br />
Although the “assistance dilemma” is a relatively new term, it describes a central issue in the learning sciences that has been debated for some time. The extreme position of assistance giving is usually called direct instruction or guided learning. Supporters of this position (e.g., Kirschner, Sweller, & Clark, 2006; Klahr & Nigam, 2004; Mayer, 2004) argue that higher assistance (direct instruction and/or tutoring of basic skills) leads to better learning results because it provides information that students cannot create on their own. Supporters of the opposing position (e.g., Bruner, 1961; Steffe & Gale, 1995) advocate a much lower assistance approach (i.e., assistance withholding), often called discovery or inquiry learning.<br />
<br />
===Independent Variables===<br />
<br />
The study compared three conditions in which students used different versions of the VLab to solve problems in thermochemistry: <br />
* (Condition 1) ''The Inquiry-learning Condition'', in which students worked with a version of VLab with no hints and minimal feedback, <br />
* (Condition 2) ''The Tutored Condition'', in which students could request hints and received feedback only when they were severely off track, and <br />
* (Condition 3) ''The Direct-instruction Condition'', in which students were directed to follow a prescribed problem-solving path.<br />
<br />
===Dependent Variables===<br />
<br />
Our plan is to include the following robust learning dependent variables in our studies.<br />
<br />
* ''[[Normal post-test]]'': Students will take an immediate post-test, right after completing work with the stoichiometry tutor<br />
* ''[[Transfer]]'': Conceptual, transfer questions will be included in the post-tests<br />
* ''[[Long-term retention]]'': Students will take a second post-test, including conceptual, transfer questions, 7 days after the initial post-test<br />
<br />
===Findings===<br />
<br />
As mentioned above, a lab study with over 100 subjects was run in early 2009 at the University of California with the above conditions. College students learned to solve chemistry stoichiometry problems with the stoichiometry tutor through hints and feedback, either polite or direct, as described above. There was a pattern in which students with low prior knowledge of chemistry performed better on subsequent problem-solving tests if they learned from the polite tutor rather than the direct tutor (d = .73 on an immediate test, d = .46 on a delayed test), whereas students with high prior knowledge showed the reverse trend (d = -.49 for an immediate test; d = -.13 for a delayed test). On the other hand, the high school study, also run in early 2009 with over 100 subjects, produced different results. In particular, the high school students did not show a pattern in which students with low prior knowledge of chemistry performed better on subsequent tests. We are still analyzing the audio feature of the study, i.e., the comparison of audio to text hints and messages, but preliminary results indicate that adding audio hurt the performance of high knowledge learners and helped low knowledge learners on the delayed test.<br />
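The effect sizes reported above are Cohen's d values: the difference between two condition means divided by the pooled standard deviation. A minimal sketch of that computation, using made-up illustrative scores rather than the study's data:

```python
from statistics import mean, variance

def cohens_d(a, b):
    """Standardized mean difference (Cohen's d) using the pooled sample SD."""
    na, nb = len(a), len(b)
    # Pool the two sample variances, weighting each by its degrees of freedom.
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / pooled ** 0.5

# Hypothetical posttest scores for two conditions (not the study's data).
polite = [8, 9, 7, 8, 9, 10]
direct = [7, 7, 6, 8, 7, 8]
print(round(cohens_d(polite, direct), 2))  # positive d favors the first group
```

By convention, d ≈ 0.2 is considered a small effect, 0.5 medium, and 0.8 large, so the d = .73 reported above for low-prior-knowledge students would be a medium-to-large effect.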
<br />
===Explanation===<br />
<br />
This study is part of the [[Cognitive Factors]] thrust.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
===Annotated Bibliography===<br />
<br />
*Borek, A., McLaren, B.M., Karabinos, M., & Yaron, D. (2009). How Much Assistance is Helpful to Students in Discovery Learning? In U. Cress, V. Dimitrova, & M. Specht (Eds.), Proceedings of the Fourth European Conference on Technology Enhanced Learning, Learning in the Synergy of Multiple Disciplines (EC-TEL 2009), LNCS 5794, September/October 2009, Nice, France. (pp. 391-404). Springer-Verlag Berlin Heidelberg.<br />
<br />
===References===<br />
<br />
*Bruner, J. S. (1961). The Art of Discovery. Harvard Educational Review, 31, 21–32.<br />
*Kirschner, P. A., Sweller, J., & Clark, R. E. (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 41(2), 75–86.<br />
*Klahr, D. & Nigam, M. (2004). The Equivalence of Learning Paths in Early Science Instruction: Effects of Direct Instruction and Discovery Learning. Psychological Science, 15(10), 661–667. <br />
*Koedinger, K. R. & Aleven, V. (2007). Exploring the Assistance Dilemma in Experiments with Cognitive Tutors. Educational Psychology Review, 19, 239–264.<br />
*Mayer, R. E. (2004). Should There Be a Three-Strikes Rule Against Pure Discovery Learning? The Case for Guided Methods of Instruction. American Psychologist, 59(1), 14–19.<br />
*Steffe, L. & Gale, J. (1995). Constructivism in Education. Hillsdale, NJ: Lawrence Erlbaum Associates.</div>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one <br />
instance of the assistance dilemma, an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
===Background and Significance===<br />
<br />
A key goal of educational technology research is to find the right level of support to build into computer-based educational systems. The so-called assistance dilemma is central to this goal: “How should learning environments balance assistance giving and withholding to achieve optimal student learning?” (Koedinger & Aleven, 2007). Assistance giving allows students to move forward when they are struggling and truly need help, yet can rob them of the motivation to learn on their own. On the other hand, assistance withholding encourages students to think and learn for themselves, yet can cause frustration when they are unsure of what to do next. <br />
<br />
Although the “assistance dilemma” is a relatively new term, it describes a central issue in the learning sciences that has been debated for some time. The extreme position of assistance giving is usually called direct-instruction or guided learning. <br />
Supporters of this position (e.g., [2,3,4]) argue that higher assistance (direct instruction and/or tutoring of basic skills) leads to better learning results because it provides information that students cannot create on their own. Supporters of the opposing position (e.g., [5,6,7,8]) advocate a much lower assistance approach (i.e., assistance withholding), often called discovery or inquiry learning.<br />
<br />
===Independent Variables===<br />
<br />
The study compared three conditions in which students used different versions of the VLab to solve problems in thermochemistry: <br />
* (Condition 1) ''The Inquiry-learning Condition'', in which students worked with a version of VLab with no hints and minimal feedback, <br />
* (Condition 2) ''The Tutored Condition'', in which students could request hints and received feedback only when they were severely off track, and <br />
* (Condition 3) ''The Direct-instruction Condition'', in which students were directed to follow a prescribed problem-solving path.<br />
<br />
===Dependent Variables===<br />
<br />
Our plan is to include the following robust learning dependent variables in our studies.<br />
<br />
* ''[[Normal post-test]]'': Students will take an immediate post-test, right after completing work with the VLab<br />
* ''[[Transfer]]'': Conceptual, transfer questions will be included in the post-tests<br />
* ''[[Long-term retention]]'': Students will take a second post-test, including conceptual, transfer questions, 7 days after the initial post-test<br />
<br />
===Findings===<br />
<br />
In this study of 87 college students, there was no difference between conditions on near transfer posttest questions. However, students in the tutored condition performed significantly better on conceptual posttest questions than students in the inquiry-learning and direct-instruction conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on the pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. These results suggest that the tutored condition provided the right amount of assistance, and that the stronger students in that condition were best able to decide when to use the help available to them. Full details are reported in Borek, McLaren, Karabinos, & Yaron (2009).<br />
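Between-condition differences such as these are commonly summarized with a standardized effect size. As a minimal illustration (not part of the study's analysis) of how a Cohen's d value is computed from two groups' post-test scores, using made-up placeholder scores rather than data from this study:

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: standardized mean difference using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    # Sample variances (n - 1 in the denominator)
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical post-test scores for two conditions (placeholder values)
tutored = [14, 12, 15, 13, 16]
control = [11, 12, 10, 13, 11]
print(round(cohens_d(tutored, control), 2))  # → 1.89
```

A positive d favors the first group; by convention, values around 0.2, 0.5, and 0.8 are read as small, medium, and large effects.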
<br />
===Explanation===<br />
<br />
This study is part of the [[Cognitive Factors]] thrust.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
===Annotated Bibliography===<br />
<br />
*Borek, A., McLaren, B.M., Karabinos, M., & Yaron, D. (2009). How Much Assistance is Helpful to Students in Discovery Learning? In U. Cress, V. Dimitrova, & M. Specht (Eds.), Proceedings of the Fourth European Conference on Technology Enhanced Learning, Learning in the Synergy of Multiple Disciplines (EC-TEL 2009), LNCS 5794, September/October 2009, Nice, France. (pp. 391-404). Springer-Verlag Berlin Heidelberg.<br />
<br />
===References===<br />
<br />
*Kirschner, P.A., Sweller, J., & Clark, R.E. (2006). Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching. Educational Psychologist, 41(2), 75–86.<br />
*Koedinger, K.R. & Aleven, V. (2007). Exploring the Assistance Dilemma in Experiments with Cognitive Tutors. Educational Psychology Review, 19(3), 239–264.<br />
*Klahr, D. & Nigam, M. (2004). The Equivalence of Learning Paths in Early Science Instruction: Effects of Direct Instruction and Discovery Learning. Psychological Science, 15(10), 661–667.<br />
*Mayer, R.E. (2004). Should There Be a Three-Strikes Rule Against Pure Discovery Learning? The Case for Guided Methods of Instruction. American Psychologist, 59(1), 14–19.</div>
*Borek, A., McLaren, B.M., Karabinos, M., & Yaron, D. (2009). How Much Assistance is Helpful to Students in Discovery Learning? In U. Cress, V. Dimitrova, & M. Specht (Eds.), Proceedings of the Fourth European Conference on Technology Enhanced Learning, Learning in the Synergy of Multiple Disciplines (EC-TEL 2009), LNCS 5794, September/October 2009, Nice, France. (pp. 391-404). Springer-Verlag Berlin Heidelberg.<br />
<br />
===References===<br />
<br />
*Koedinger, K.R. & Aleven, V. (2007). Exploring the Assistance Dilemma in Experiments with Cognitive Tutors. Educational Psychology Review 19, 239—264.</div>Bmclarenhttps://learnlab.org/wiki/index.php?title=McLaren_-_The_Assistance_Dilemma_And_Discovery_Learning&diff=10086McLaren - The Assistance Dilemma And Discovery Learning2009-11-20T23:15:04Z<p>Bmclaren: /* Annotated Bibliography */</p>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, chemistry domain expertise, support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, chemistry domain expertise, support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one instance of the assistance dilemma, an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near-transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
===Background and Significance===<br />
<br />
A key goal of educational technology research is to find the right level of support to build into computer-based educational systems. The so-called assistance dilemma is central to this goal: “How should learning environments balance assistance giving and withholding to achieve optimal student learning?” (Koedinger & Aleven, 2007). Assistance giving allows students to move forward when they are struggling and truly need help, yet can rob them of the motivation to learn on their own. On the other hand, assistance withholding encourages students to think and learn for themselves, yet can cause frustration when they are unsure of what to do next. <br />
<br />
Although the “assistance dilemma” is a relatively new term, it describes a central issue in the learning sciences that has been debated for some time. The extreme position of assistance giving is usually called direct instruction or guided learning. Supporters of this position (e.g., [2,3,4]) argue that higher assistance (direct instruction and/or tutoring of basic skills) leads to better learning results because it provides information that students cannot create on their own. Supporters of the opposing position (e.g., [5,6,7,8]) advocate a much lower assistance approach (i.e., assistance withholding), often called discovery or inquiry learning.<br />
<br />
===Independent Variables===<br />
<br />
The study compared three conditions in which students used different versions of the VLab to solve problems in thermochemistry: <br />
* (Condition 1) the Inquiry-learning Condition, in which students worked with a version of VLab with no hints and minimal feedback, <br />
* (Condition 2) the Tutored Condition, in which students could request hints and received feedback only when they were severely off track, and <br />
* (Condition 3) the Direct-instruction Condition, in which students were directed to follow a prescribed problem-solving path.<br />
<br />
===Dependent Variables===<br />
<br />
Our plan is to include the following robust learning dependent variables in our studies.<br />
<br />
* ''[[Normal post-test]]'': Students will take an immediate post-test, right after completing work with the stoichiometry tutor<br />
* ''[[Transfer]]'': Conceptual, transfer questions will be included in the post-tests<br />
* ''[[Long-term retention]]'': Students will take a second post-test, including conceptual, transfer questions, 7 days after the initial post-test<br />
<br />
===Findings===<br />
<br />
As mentioned above, a lab study with over 100 subjects was run in early 2009 at the University of California with the above conditions. College students learned to solve chemistry stoichiometry problems with the stoichiometry tutor through hints and feedback, either polite or direct, as described above. There was a pattern in which students with low prior knowledge of chemistry performed better on subsequent problem-solving tests if they learned from the polite tutor rather than the direct tutor (d = .73 on an immediate test, d = .46 on a delayed test), whereas students with high prior knowledge showed the reverse trend (d = -.49 for an immediate test; d = -.13 for a delayed test). On the other hand, the high school study, also run in early 2009 with over 100 subjects, produced different results. In particular, the high school students did not show a pattern in which students with low prior knowledge of chemistry performed better on subsequent tests. We are still analyzing the audio feature of the study, i.e., the comparison of audio to text hints and messages, but preliminary results indicate that adding audio hurt the performance of high knowledge learners and helped low knowledge learners on the delayed test.<br />
<br />
===Explanation===<br />
<br />
This study is part of the [[Computational Modeling and Data Mining]] thrust.<br />
<br />
Our explanation of the specific findings from our experiment is forthcoming. We are currently preparing a paper for the Journal of Educational Psychology that will provide this explanation.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
* This study has a clear connection to the [[McLaren_et_al_-_Studying_the_Learning_Effect_of_Personalization_and_Worked_Examples_in_the_Solving_of_Stoich_Problems | McLaren et al. study]], in that both studies explore the effect of personalized, polite hints and feedback. In fact, it was through McLaren's original studies, built on earlier work on e-Learning principles by Mayer, that Mayer and McLaren decided to join forces.<br />
<br />
===Annotated Bibliography===<br />
<br />
*Borek, A., McLaren, B.M., Karabinos, M., & Yaron, D. (2009). How Much Assistance is Helpful to Students in Discovery Learning? In U. Cress, V. Dimitrova, & M. Specht (Eds.), Proceedings of the Fourth European Conference on Technology Enhanced Learning, Learning in the Synergy of Multiple Disciplines (EC-TEL 2009), LNCS 5794, September/October 2009, Nice, France. (pp. 391-404). Springer-Verlag Berlin Heidelberg.<br />
<br />
===References===<br />
<br />
*Koedinger, K.R. & Aleven, V. (2007). Exploring the Assistance Dilemma in Experiments with Cognitive Tutors. Educational Psychology Review 19, 239—264.</div>Bmclarenhttps://learnlab.org/wiki/index.php?title=McLaren_-_The_Assistance_Dilemma_And_Discovery_Learning&diff=10085McLaren - The Assistance Dilemma And Discovery Learning2009-11-20T23:14:24Z<p>Bmclaren: /* References */</p>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, chemistry domain expertise, support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, chemistry domain expertise, support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one instance of the assistance dilemma, an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near-transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
===Background and Significance===<br />
<br />
A key goal of educational technology research is to find the right level of support to build into computer-based educational systems. The so-called assistance dilemma is central to this goal: “How should learning environments balance assistance giving and withholding to achieve optimal student learning?” (Koedinger & Aleven, 2007). Assistance giving allows students to move forward when they are struggling and truly need help, yet can rob them of the motivation to learn on their own. On the other hand, assistance withholding encourages students to think and learn for themselves, yet can cause frustration when they are unsure of what to do next. <br />
<br />
Although the “assistance dilemma” is a relatively new term, it describes a central issue in the learning sciences that has been debated for some time. The extreme position of assistance giving is usually called direct instruction or guided learning. Supporters of this position (e.g., [2,3,4]) argue that higher assistance (direct instruction and/or tutoring of basic skills) leads to better learning results because it provides information that students cannot create on their own. Supporters of the opposing position (e.g., [5,6,7,8]) advocate a much lower assistance approach (i.e., assistance withholding), often called discovery or inquiry learning.<br />
<br />
===Independent Variables===<br />
<br />
The study compared three conditions in which students used different versions of the VLab to solve problems in thermochemistry: <br />
* (Condition 1) the Inquiry-learning Condition, in which students worked with a version of VLab with no hints and minimal feedback, <br />
* (Condition 2) the Tutored Condition, in which students could request hints and received feedback only when they were severely off track, and <br />
* (Condition 3) the Direct-instruction Condition, in which students were directed to follow a prescribed problem-solving path.<br />
<br />
===Dependent Variables===<br />
<br />
Our plan is to include the following robust learning dependent variables in our studies.<br />
<br />
* ''[[Normal post-test]]'': Students will take an immediate post-test, right after completing work with the stoichiometry tutor<br />
* ''[[Transfer]]'': Conceptual, transfer questions will be included in the post-tests<br />
* ''[[Long-term retention]]'': Students will take a second post-test, including conceptual, transfer questions, 7 days after the initial post-test<br />
<br />
===Findings===<br />
<br />
As mentioned above, a lab study with over 100 subjects was run in early 2009 at the University of California with the above conditions. College students learned to solve chemistry stoichiometry problems with the stoichiometry tutor through hints and feedback, either polite or direct, as described above. There was a pattern in which students with low prior knowledge of chemistry performed better on subsequent problem-solving tests if they learned from the polite tutor rather than the direct tutor (d = .73 on an immediate test, d = .46 on a delayed test), whereas students with high prior knowledge showed the reverse trend (d = -.49 for an immediate test; d = -.13 for a delayed test). On the other hand, the high school study, also run in early 2009 with over 100 subjects, produced different results. In particular, the high school students did not show a pattern in which students with low prior knowledge of chemistry performed better on subsequent tests. We are still analyzing the audio feature of the study, i.e., the comparison of audio to text hints and messages, but preliminary results indicate that adding audio hurt the performance of high knowledge learners and helped low knowledge learners on the delayed test.<br />
<br />
===Explanation===<br />
<br />
This study is part of the [[Computational Modeling and Data Mining]] thrust.<br />
<br />
Our explanation of the specific findings from our experiment is forthcoming. We are currently preparing a paper for the Journal of Educational Psychology that will provide this explanation.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
* This study has a clear connection to the [[McLaren_et_al_-_Studying_the_Learning_Effect_of_Personalization_and_Worked_Examples_in_the_Solving_of_Stoich_Problems | McLaren et al. study]], in that both studies explore the effect of personalized, polite hints and feedback. In fact, it was through McLaren's original studies, built on earlier work on e-Learning principles by Mayer, that Mayer and McLaren decided to join forces.<br />
<br />
===Annotated Bibliography===<br />
<br />
*McLaren, B.M., DeLeeuw, K.E., & Mayer, R.E. (submitted). A Politeness Effect in Learning with Web-Based Intelligent Tutors. Submitted to the Journal of Human Computer Studies.<br />
<br />
===References===<br />
<br />
*Koedinger, K.R. & Aleven, V. (2007). Exploring the Assistance Dilemma in Experiments with Cognitive Tutors. Educational Psychology Review 19, 239—264.</div>Bmclarenhttps://learnlab.org/wiki/index.php?title=McLaren_-_The_Assistance_Dilemma_And_Discovery_Learning&diff=10084McLaren - The Assistance Dilemma And Discovery Learning2009-11-20T23:13:28Z<p>Bmclaren: /* Background and Significance */</p>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, chemistry domain expertise, support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, chemistry domain expertise, support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one instance of the assistance dilemma, an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near-transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
===Background and Significance===<br />
<br />
A key goal of educational technology research is to find the right level of support to build into computer-based educational systems. The so-called assistance dilemma is central to this goal: “How should learning environments balance assistance giving and withholding to achieve optimal student learning?” (Koedinger & Aleven, 2007). Assistance giving allows students to move forward when they are struggling and truly need help, yet can rob them of the motivation to learn on their own. On the other hand, assistance withholding encourages students to think and learn for themselves, yet can cause frustration when they are unsure of what to do next. <br />
<br />
Although the “assistance dilemma” is a relatively new term, it describes a central issue in the learning sciences that has been debated for some time. The extreme position of assistance giving is usually called direct instruction or guided learning. Supporters of this position (e.g., [2,3,4]) argue that higher assistance (direct instruction and/or tutoring of basic skills) leads to better learning results because it provides information that students cannot create on their own. Supporters of the opposing position (e.g., [5,6,7,8]) advocate a much lower assistance approach (i.e., assistance withholding), often called discovery or inquiry learning.<br />
<br />
===Independent Variables===<br />
<br />
The study compared three conditions in which students used different versions of the VLab to solve problems in thermochemistry: <br />
* (Condition 1) the Inquiry-learning Condition, in which students worked with a version of VLab with no hints and minimal feedback, <br />
* (Condition 2) the Tutored Condition, in which students could request hints and received feedback only when they were severely off track, and <br />
* (Condition 3) the Direct-instruction Condition, in which students were directed to follow a prescribed problem-solving path.<br />
<br />
===Dependent Variables===<br />
<br />
Our plan is to include the following robust learning dependent variables in our studies.<br />
<br />
* ''[[Normal post-test]]'': Students will take an immediate post-test, right after completing work with the stoichiometry tutor<br />
* ''[[Transfer]]'': Conceptual, transfer questions will be included in the post-tests<br />
* ''[[Long-term retention]]'': Students will take a second post-test, including conceptual, transfer questions, 7 days after the initial post-test<br />
<br />
===Findings===<br />
<br />
As mentioned above, a lab study with over 100 subjects was run in early 2009 at the University of California with the above conditions. College students learned to solve chemistry stoichiometry problems with the stoichiometry tutor through hints and feedback, either polite or direct, as described above. There was a pattern in which students with low prior knowledge of chemistry performed better on subsequent problem-solving tests if they learned from the polite tutor rather than the direct tutor (d = .73 on an immediate test, d = .46 on a delayed test), whereas students with high prior knowledge showed the reverse trend (d = -.49 for an immediate test; d = -.13 for a delayed test). On the other hand, the high school study, also run in early 2009 with over 100 subjects, produced different results. In particular, the high school students did not show a pattern in which students with low prior knowledge of chemistry performed better on subsequent tests. We are still analyzing the audio feature of the study, i.e., the comparison of audio to text hints and messages, but preliminary results indicate that adding audio hurt the performance of high knowledge learners and helped low knowledge learners on the delayed test.<br />
<br />
===Explanation===<br />
<br />
This study is part of the [[Computational Modeling and Data Mining]] thrust.<br />
<br />
Our explanation of the specific findings from our experiment is forthcoming. We are currently preparing a paper for the Journal of Educational Psychology that will provide this explanation.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
* This study has a clear connection to the [[McLaren_et_al_-_Studying_the_Learning_Effect_of_Personalization_and_Worked_Examples_in_the_Solving_of_Stoich_Problems | McLaren et al. study]], in that both studies explore the effect of personalized, polite hints and feedback. In fact, it was through McLaren's original studies, built on earlier work on e-Learning principles by Mayer, that Mayer and McLaren decided to join forces.<br />
<br />
===Annotated Bibliography===<br />
<br />
*McLaren, B.M., DeLeeuw, K.E., & Mayer, R.E. (submitted). A Politeness Effect in Learning with Web-Based Intelligent Tutors. Submitted to the Journal of Human Computer Studies.<br />
<br />
===References===<br />
<br />
*Brown, P., & Levinson, S. C. (1987). Politeness: Some universals in language usage. New York: Cambridge University Press.<br />
*Mayer, R. E. (2005). Principles of multimedia learning based on social cues: Personalization, voice, and image principles. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 201-212). New York: Cambridge University Press.<br />
*McLaren, B. M., Lim, S., Yaron, D., and Koedinger, K. R. (2007). Can a Polite Intelligent Tutoring System Lead to Improved Learning Outside of the Lab? In the Proceedings of the 13th International Conference on Artificial Intelligence in Education (AIED-07), pp 331-338. [[http://www.learnlab.org/research/wiki/images/5/5a/AIED-07-PoliteTutoring.pdf pdf file]]<br />
*Nass, C., & Brave, S. (2005). Wired for speech: How voice activates and advances the human-computer relationship. Cambridge, MA: MIT Press.<br />
*Reeves, B., and Nass, C. (1996). The media equation. New York: Cambridge University Press.<br />
*Wang, N., Johnson, W. L., Mayer, R. E., Rizzo, P., Shaw, E., & Collins, H. (2008). The politeness effect: Pedagogical agents and learning outcomes. International Journal of Human-Computer Studies, 66, 98-112.</div>Bmclarenhttps://learnlab.org/wiki/index.php?title=McLaren_-_The_Assistance_Dilemma_And_Discovery_Learning&diff=10083McLaren - The Assistance Dilemma And Discovery Learning2009-11-20T23:11:13Z<p>Bmclaren: /* Hypothesis */</p>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, chemistry domain expertise, support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, chemistry domain expertise, support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one instance of the assistance dilemma, an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near-transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
===Background and Significance===<br />
<br />
The polite tutor uses politeness strategies developed by Brown and Levinson (1987), in which the goal is to save positive face (allowing the learner to feel appreciated and respected by the conversational partner) and to save negative face (allowing the learner to feel that his or her freedom of action is unimpeded by the other party in the conversation). After interacting with the stoichiometry tutor while solving a series of problems for several hours, learners will be given a transfer test based on the underlying principles, including an immediate test and a delayed test. We expect learners who had the polite tutor to perform substantially better on the transfer test than learners who had the direct tutor.<br />
<br />
We will also experiment with Clark & Mayer's Modality Principle, in which audio narration replaces onscreen text.<br />
<br />
===Independent Variables===<br />
<br />
The study compared three conditions in which students used different versions of the VLab to solve problems in thermochemistry: <br />
* (Condition 1) the Inquiry-learning Condition, in which students worked with a version of VLab with no hints and minimal feedback, <br />
</div>Bmclarenhttps://learnlab.org/wiki/index.php?title=McLaren_-_The_Assistance_Dilemma_And_Discovery_Learning&diff=10082McLaren - The Assistance Dilemma And Discovery Learning2009-11-20T23:10:47Z<p>Bmclaren: /* Independent Variables */</p>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, chemistry domain expertise, support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, chemistry domain expertise, support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one instance of the assistance dilemma, an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
We have two hypotheses based on this research question, the second building on the first:<br />
<br />
;H1<br />
:Students will experience more robust learning when they work with polite rather than direct tutors, because learners are more likely to accept polite tutors as conversational partners<br />
<br />
;H2<br />
:Students will experience more robust learning when they work with polite tutors that provide audio feedback and hints rather than polite or direct tutors that provide no audio feedback, because learners are more likely to accept audio polite tutors as conversational partners<br />
<br />
===Background and Significance===<br />
<br />
The polite tutor uses politeness strategies developed by Brown and Levinson (1987), in which the goal is to save positive face (allowing the learner to feel appreciated and respected by the conversational partner) and to save negative face (allowing the learner to feel that his or her freedom of action is unimpeded by the other party in the conversation). After interacting with the stoichiometry tutor on a series of problems for several hours, learners will be given a transfer test based on the underlying principles, including both an immediate test and a delayed test. We expect learners who had the polite tutor to perform substantially better on the transfer test than learners who had the direct tutor.<br />
<br />
We will also experiment with Clark & Mayer's Modality Principle, in which audio narration replaces onscreen text.<br />
<br />
===Independent Variables===<br />
<br />
The study compared three conditions in which students used different versions of the VLab to solve problems in thermochemistry: <br />
* (Condition 1) the Inquiry-learning Condition, in which students worked with a version of VLab with no hints and minimal feedback, <br />
* (Condition 2) the Tutored Condition, in which students could request hints and received feedback only when they were severely off track, and <br />
* (Condition 3) the Direct-instruction Condition, in which students were directed to follow a prescribed problem-solving path.<br />
<br />
===Dependent Variables===<br />
<br />
Our plan is to include the following robust learning dependent variables in our studies.<br />
<br />
* ''[[Normal post-test]]'': Students will take an immediate post-test, right after completing work with the stoichiometry tutor<br />
* ''[[Transfer]]'': Conceptual, transfer questions will be included in the post-tests<br />
* ''[[Long-term retention]]'': Students will take a second post-test, including conceptual, transfer questions, 7 days after the initial post-test<br />
<br />
===Findings===<br />
<br />
As mentioned above, a lab study with over 100 subjects was run in early 2009 at the University of California with the above conditions. College students learned to solve chemistry stoichiometry problems with the stoichiometry tutor through hints and feedback, either polite or direct, as described above. There was a pattern in which students with low prior knowledge of chemistry performed better on subsequent problem-solving tests if they learned from the polite tutor rather than the direct tutor (d = .73 on an immediate test, d = .46 on a delayed test), whereas students with high prior knowledge showed the reverse trend (d = -.49 for an immediate test; d = -.13 for a delayed test). On the other hand, the high school study, also run in early 2009 with over 100 subjects, produced different results. In particular, the high school students did not show a pattern in which students with low prior knowledge of chemistry performed better on subsequent tests. We are still analyzing the audio feature of the study, i.e., the comparison of audio to text hints and messages, but preliminary results indicate that adding audio hurt the performance of high knowledge learners and helped low knowledge learners on the delayed test.<br />
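The group comparisons above are reported as Cohen's d (a standardized mean difference). As a minimal sketch of how such effect sizes are computed, the helper below uses the pooled-standard-deviation form; the scores are illustrative numbers, not data from this study:

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    # Standardized mean difference: (mean_a - mean_b) / pooled standard deviation
    na, nb = len(group_a), len(group_b)
    var_a, var_b = stdev(group_a) ** 2, stdev(group_b) ** 2
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Hypothetical post-test scores for two conditions (not the study's data)
polite = [14, 16, 15, 17, 13, 18]
direct = [12, 13, 11, 14, 12, 13]
print(round(cohens_d(polite, direct), 2))
```

By Cohen's common convention, d near 0.2 is a small effect, 0.5 medium, and 0.8 large, so the d = .73 immediate-test difference reported above is a medium-to-large effect.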
<br />
===Explanation===<br />
<br />
This study is part of the [[Computational Modeling and Data Mining]] thrust.<br />
<br />
Our explanation of the specific findings from our experiment is forthcoming. We are currently preparing a paper for the Journal of Educational Psychology that will provide such an explanation.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
* This study has a clear connection to the [[McLaren_et_al_-_Studying_the_Learning_Effect_of_Personalization_and_Worked_Examples_in_the_Solving_of_Stoich_Problems | McLaren et al. study]], in that both studies explore the effect of personalized, polite hints and feedback. In fact, it was through McLaren's original studies, built on earlier work on e-Learning principles by Mayer, that Mayer and McLaren decided to join forces.<br />
<br />
===Annotated Bibliography===<br />
<br />
*McLaren, B.M., DeLeeuw, K.E., & Mayer, R.E. (submitted). A Politeness Effect in Learning with Web-Based Intelligent Tutors. Submitted to the Journal of Human-Computer Studies.<br />
<br />
===References===<br />
<br />
*Brown, P., & Levinson, S. C. (1987). Politeness: Some universals in language usage. New York: Cambridge University Press.<br />
*Mayer, R. E. (2005). Principles of multimedia learning based on social cues: Personalization, voice, and image principles. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 201-212). New York: Cambridge University Press.<br />
*McLaren, B. M., Lim, S., Yaron, D., and Koedinger, K. R. (2007). Can a Polite Intelligent Tutoring System Lead to Improved Learning Outside of the Lab? In the Proceedings of the 13th International Conference on Artificial Intelligence in Education (AIED-07), pp. 331-338. [http://www.learnlab.org/research/wiki/images/5/5a/AIED-07-PoliteTutoring.pdf pdf file]<br />
*Nass, C., & Brave, S. (2005). Wired for speech: How voice activates and advances the human-computer relationship. Cambridge, MA: MIT Press.<br />
*Reeves, B., and Nass, C. (1996). The media equation. New York: Cambridge University Press.<br />
*Wang, N., Johnson, W. L., Mayer, R. E., Rizzo, P., Shaw, E., & Collins, H. (2008). The politeness effect: Pedagogical agents and learning outcomes. International Journal of Human-Computer Studies, 66, 98-112.</div>
Bmclarenhttps://learnlab.org/wiki/index.php?title=McLaren_-_The_Assistance_Dilemma_And_Discovery_Learning&diff=10079McLaren - The Assistance Dilemma And Discovery Learning2009-11-20T23:07:29Z<p>Bmclaren: /* Research Questions */</p>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one <br />
instance of the assistance dilemma, an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[Assistance dilemma]]<br />
<br />
===Research Questions===<br />
<br />
How much help helps in discovery learning?<br />
<br />
===Hypothesis===<br />
<br />
We have two hypotheses, based on these research questions, with the second built on the first:<br />
<br />
;H1<br />
:Students will experience more robust learning when they work with polite rather than direct tutors, because learners are more likely to accept polite tutors as conversational partners<br />
<br />
;H2<br />
:Students will experience more robust learning when they work with polite tutors that provide audio feedback and hints rather than polite or direct tutors that provide no audio feedback, because learners are more likely to accept audio polite tutors as conversational partners<br />
<br />
===Background and Significance===<br />
<br />
The polite tutor uses politeness strategies developed by Brown and Levinson (1987), in which the goal is to save positive face--allowing the learner to feel appreciated and respected by the conversational partner--and to save negative face--allowing the learner to feel that his or her freedom of action is unimpeded by the other party in the conversation. After interacting with the stoichiometry tutor to solve a series of problems over several hours, learners will be given a transfer test based on the underlying principles--including an immediate test and a delayed test. We expect learners who had the polite tutor to perform substantially better on the transfer test than learners who had the direct tutor.<br />
<br />
We will also experiment with Clark & Mayer's Modality Principle, in which audio narration replaces onscreen text.<br />
<br />
===Independent Variables===<br />
<br />
The independent variables we will experiment with in our studies are politeness (either direct or polite) and audio (hints & feedback in audio or text). <br />
<br />
These variables will be crossed, leading to a 2x2 factorial design with the following conditions.<br />
<br />
* ''Condition 1: Polite-Audio'': Students work with the stoichiometry tutor that provides polite statements that are spoken<br />
<br />
[[Image:Cond1-PoliteAudio.jpg|600px|center]]<br />
<br />
* ''Condition 2: Polite-Text'': Students work with the stoichiometry tutor that provides polite statements that are in text only<br />
<br />
[[Image:Cond2-PoliteText.jpg|600px|center]]<br />
<br />
* ''Condition 3: Direct-Audio'': Students work with the stoichiometry tutor that provides direct statements that are spoken<br />
<br />
[[Image:Cond3-DirectAudio.jpg|600px|center]]<br />
<br />
* ''Condition 4: Direct-Text'': Students work with the stoichiometry tutor that provides direct statements that are in text only<br />
<br />
[[Image:Cond4-DirectText.jpg|600px|center]]<br />
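Crossing the two binary factors as described above yields the four study conditions. A minimal sketch (illustrative only; the factor and condition names simply mirror the list above):<br />

```python
from itertools import product

# The two independent variables, each with two levels
politeness = ["Polite", "Direct"]
modality = ["Audio", "Text"]

# Crossing the factors produces the four conditions of the 2x2 factorial design
conditions = [f"{p}-{m}" for p, m in product(politeness, modality)]
print(conditions)  # ['Polite-Audio', 'Polite-Text', 'Direct-Audio', 'Direct-Text']
```

Each participant is assigned to exactly one of these four conditions.<br />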
<br />
===Dependent Variables===<br />
<br />
Our plan is to include the following robust learning dependent variables in our studies.<br />
<br />
* ''[[Normal post-test]]'': Students will take an immediate post-test, right after completing work with the stoichiometry tutor<br />
* ''[[Transfer]]'': Conceptual, transfer questions will be included in the post-tests<br />
* ''[[Long-term retention]]'': Students will take a second post-test, including conceptual, transfer questions, 7 days after the initial post-test<br />
<br />
===Findings===<br />
<br />
As mentioned above, a lab study with over 100 subjects was run in early 2009 at the University of California with the above conditions. College students learned to solve chemistry stoichiometry problems with the stoichiometry tutor through hints and feedback, either polite or direct, as described above. There was a pattern in which students with low prior knowledge of chemistry performed better on subsequent problem-solving tests if they learned from the polite tutor rather than the direct tutor (d = .73 on an immediate test, d = .46 on a delayed test), whereas students with high prior knowledge showed the reverse trend (d = -.49 for an immediate test; d = -.13 for a delayed test). On the other hand, the high school study, also run in early 2009 with over 100 subjects, produced different results. In particular, the high school students did not show a pattern in which students with low prior knowledge of chemistry performed better on subsequent tests. We are still analyzing the audio feature of the study, i.e., the comparison of audio to text hints and messages, but preliminary results indicate that adding audio hurt the performance of high knowledge learners and helped low knowledge learners on the delayed test.<br />
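The d values reported above are Cohen's d effect sizes: the difference between two group means divided by the pooled standard deviation. As a rough sketch of how such a value is computed (this is not the authors' analysis code, and the sample scores are invented):<br />

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled SD."""
    na, nb = len(group_a), len(group_b)
    # Pooled standard deviation from the two sample (n-1) variances
    pooled_sd = (((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Hypothetical posttest scores for two conditions (illustrative only)
polite = [14, 16, 15, 17, 13, 18]
direct = [12, 13, 14, 11, 15, 13]
print(round(cohens_d(polite, direct), 2))  # 1.51
```

A positive d favors the first group; the negative d values reported for high prior-knowledge students thus indicate an advantage for the direct tutor in that subgroup.<br />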
<br />
===Explanation===<br />
<br />
This study is part of the [[Computational Modeling and Data Mining]] thrust.<br />
<br />
Our explanation for the specific findings from our experiment is forthcoming. We are currently preparing a paper for the Journal of Educational Psychology that will provide such an explanation.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
* This study has a clear connection to the [[McLaren_et_al_-_Studying_the_Learning_Effect_of_Personalization_and_Worked_Examples_in_the_Solving_of_Stoich_Problems | McLaren et al. study]], in that both studies explore the effect of personalized, polite hints and feedback. In fact, it was through McLaren's original studies, built on earlier work on e-Learning principles by Mayer, that Mayer and McLaren decided to join forces.<br />
<br />
===Annotated Bibliography===<br />
<br />
*McLaren, B.M., DeLeeuw, K.E., & Mayer, R.E. (submitted). A Politeness Effect in Learning with Web-Based Intelligent Tutors. Submitted to the Journal of Human-Computer Studies.<br />
<br />
===References===<br />
<br />
*Brown, P., & Levinson, S. C. (1987). Politeness: Some universals in language usage. New York: Cambridge University Press.<br />
*Mayer, R. E. (2005). Principles of multimedia learning based on social cues: Personalization, voice, and image principles. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 201-212). New York: Cambridge University Press.<br />
*McLaren, B. M., Lim, S., Yaron, D., and Koedinger, K. R. (2007). Can a Polite Intelligent Tutoring System Lead to Improved Learning Outside of the Lab? In the Proceedings of the 13th International Conference on Artificial Intelligence in Education (AIED-07), pp 331-338. [[http://www.learnlab.org/research/wiki/images/5/5a/AIED-07-PoliteTutoring.pdf pdf file]]<br />
*Nass, C., & Brave, S. (2005). Wired for speech: How voice activates and advances the human-computer relationship. Cambridge, MA: MIT Press.<br />
*Reeves, B., and Nass, C. (1996). The media equation. New York: Cambridge University Press.<br />
*Wang, N., Johnson, W. L., Mayer, R. E., Rizzo, P., Shaw, E., & Collins, H. (2008). The politeness effect: Pedagogical agents and learning outcomes. International Journal of Human-Computer Studies, 66, 98-112.</div>Bmclarenhttps://learnlab.org/wiki/index.php?title=McLaren_-_The_Assistance_Dilemma_And_Discovery_Learning&diff=10076McLaren - The Assistance Dilemma And Discovery Learning2009-11-20T23:02:42Z<p>Bmclaren: New page: ==The Assistance Dilemma and Discovery Learning== Bruce M. McLaren ===Overview=== PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh Others who have contributed 160 hours or ...</p>
<hr />
<div>==The Assistance Dilemma and Discovery Learning==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PI: Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* Alex Borek, University of Karlsruhe, Germany, research, programming, statistical analysis<br />
* Dave Yaron, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
* Mike Karabinos, Carnegie Mellon University, Chemistry domain expertise, Support of classroom study<br />
<br />
===Abstract===<br />
<br />
How much help helps in discovery learning? This question is one instance of the assistance dilemma, an important issue in the learning sciences and educational technology research. To explore this question, we conducted a study involving 87 college students solving problems in a virtual chemistry laboratory (VLab), testing three points along an assistance continuum: (1) a minimal assistance, inquiry-learning approach, in which students used the VLab with no hints and minimal feedback; (2) a mid-level assistance, tutored approach, in which students received intelligent tutoring hints and feedback while using the VLab (i.e., help given on request and feedback on incorrect steps); and (3) a high assistance, direct-instruction approach, in which students were coaxed to follow a specific set of steps in the VLab. Although there was no difference in learning results between conditions on near transfer posttest questions, students in the tutored condition did significantly better on conceptual posttest questions than students in the other two conditions. Furthermore, the more advanced students in the tutored condition, those who performed better on a pretest, did significantly better on the conceptual posttest than their counterparts in the other two conditions. Thus, it appears that students in the tutored condition had just the right amount of assistance, and that the better students in that condition used their superior metacognitive skills and/or motivation to decide when to use the available assistance to their best advantage.<br />
<br />
===Glossary===<br />
<br />
*[[E-Learning Principles]] <br />
*[[Personalization]]<br />
*[[Politeness Principle]]<br />
*[[Modality Principle]]<br />
<br />
===Research Questions===<br />
<br />
Do polite feedback and hints within a computer tutor lead to more robust learning than direct feedback and hints? <br />
<br />
Does polite, audio feedback and hints within a computer tutor lead to more robust learning than text feedback and hints (whether polite or direct)?<br />
<br />
===Hypothesis===<br />
<br />
We have two hypotheses, based on these research questions, with the second built on the first:<br />
<br />
;H1<br />
:Students will experience more robust learning when they work with polite rather than direct tutors, because learners are more likely to accept polite tutors as conversational partners<br />
<br />
;H2<br />
:Students will experience more robust learning when they work with polite tutors that provide audio feedback and hints rather than polite or direct tutors that provide no audio feedback, because learners are more likely to accept audio polite tutors as conversational partners<br />
<br />
===Background and Significance===<br />
<br />
The polite tutor uses politeness strategies developed by Brown and Levinson (1987), in which the goal is to save positive face--allowing the learner to feel appreciated and respected by the conversational partner--and to save negative face--allowing the learner to feel that his or her freedom of action is unimpeded by the other party in the conversation. After interacting with the stoichiometry tutor to solve a series of problems over several hours, learners will be given a transfer test based on the underlying principles--including an immediate test and a delayed test. We expect learners who had the polite tutor to perform substantially better on the transfer test than learners who had the direct tutor.<br />
<br />
We will also experiment with Clark & Mayer's Modality Principle, in which audio narration replaces onscreen text.<br />
<br />
===Independent Variables===<br />
<br />
The independent variables we will experiment with in our studies are politeness (either direct or polite) and audio (hints & feedback in audio or text). <br />
<br />
These variables will be crossed, leading to a 2x2 factorial design with the following conditions.<br />
<br />
* ''Condition 1: Polite-Audio'': Students work with the stoichiometry tutor that provides polite statements that are spoken<br />
<br />
[[Image:Cond1-PoliteAudio.jpg|600px|center]]<br />
<br />
* ''Condition 2: Polite-Text'': Students work with the stoichiometry tutor that provides polite statements that are in text only<br />
<br />
[[Image:Cond2-PoliteText.jpg|600px|center]]<br />
<br />
* ''Condition 3: Direct-Audio'': Students work with the stoichiometry tutor that provides direct statements that are spoken<br />
<br />
[[Image:Cond3-DirectAudio.jpg|600px|center]]<br />
<br />
* ''Condition 4: Direct-Text'': Students work with the stoichiometry tutor that provides direct statements that are in text only<br />
<br />
[[Image:Cond4-DirectText.jpg|600px|center]]<br />
<br />
===Dependent Variables===<br />
<br />
Our plan is to include the following robust learning dependent variables in our studies.<br />
<br />
* ''[[Normal post-test]]'': Students will take an immediate post-test, right after completing work with the stoichiometry tutor<br />
* ''[[Transfer]]'': Conceptual, transfer questions will be included in the post-tests<br />
* ''[[Long-term retention]]'': Students will take a second post-test, including conceptual, transfer questions, 7 days after the initial post-test<br />
<br />
===Findings===<br />
<br />
As mentioned above, a lab study with over 100 subjects was run in early 2009 at the University of California with the above conditions. College students learned to solve chemistry stoichiometry problems with the stoichiometry tutor through hints and feedback, either polite or direct, as described above. There was a pattern in which students with low prior knowledge of chemistry performed better on subsequent problem-solving tests if they learned from the polite tutor rather than the direct tutor (d = .73 on an immediate test, d = .46 on a delayed test), whereas students with high prior knowledge showed the reverse trend (d = -.49 for an immediate test; d = -.13 for a delayed test). On the other hand, the high school study, also run in early 2009 with over 100 subjects, produced different results. In particular, the high school students did not show a pattern in which students with low prior knowledge of chemistry performed better on subsequent tests. We are still analyzing the audio feature of the study, i.e., the comparison of audio to text hints and messages, but preliminary results indicate that adding audio hurt the performance of high knowledge learners and helped low knowledge learners on the delayed test.<br />
<br />
===Explanation===<br />
<br />
This study is part of the [[Computational Modeling and Data Mining]] thrust.<br />
<br />
Our explanation for the specific findings from our experiment is forthcoming. We are currently preparing a paper for the Journal of Educational Psychology that will provide such an explanation.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
* This study has a clear connection to the [[McLaren_et_al_-_Studying_the_Learning_Effect_of_Personalization_and_Worked_Examples_in_the_Solving_of_Stoich_Problems | McLaren et al study]], in that both studies explore the effect of personalized, polite hints and feedback. In fact, it was through McLaren's original studies, built on earlier work on e-Learning principles by Mayer, that Mayer and McLaren decided to join forces.<br />
<br />
===Annotated Bibliography===<br />
<br />
*McLaren, B.M., DeLeeuw, K.E., & Mayer, R.E. (submitted). A Politeness Effect in Learning with Web-Based Intelligent Tutors. Submitted to the Journal of Human-Computer Studies.<br />
<br />
===References===<br />
<br />
*Brown, P., & Levinson, S. C. (1987). Politeness: Some universals in language usage. New York: Cambridge University Press.<br />
*Mayer, R. E. (2005). Principles of multimedia learning based on social cues: Personalization, voice, and image principles. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 201-212). New York: Cambridge University Press.<br />
*McLaren, B. M., Lim, S., Yaron, D., and Koedinger, K. R. (2007). Can a Polite Intelligent Tutoring System Lead to Improved Learning Outside of the Lab? In the Proceedings of the 13th International Conference on Artificial Intelligence in Education (AIED-07), pp. 331-338. [http://www.learnlab.org/research/wiki/images/5/5a/AIED-07-PoliteTutoring.pdf pdf file]<br />
*Nass, C., & Brave, S. (2005). Wired for speech: How voice activates and advances the human-computer relationship. Cambridge, MA: MIT Press.<br />
*Reeves, B., and Nass, C. (1996). The media equation. New York: Cambridge University Press.<br />
*Wang, N., Johnson, W. L., Mayer, R. E., Rizzo, P., Shaw, E., & Collins, H. (2008). The politeness effect: Pedagogical agents and learning outcomes. International Journal of Human-Computer Studies, 66, 98-112.</div>Bmclarenhttps://learnlab.org/wiki/index.php?title=Cognitive_Factors&diff=10075Cognitive Factors2009-11-20T22:59:27Z<p>Bmclaren: /* Descendents */</p>
<hr />
<div>The research in this thrust is aimed at understanding cognitive learning—changes in knowledge—that result from [[instructional events]]. It builds on work in the learning sciences field at large and on research carried out in the PSLC over its first four years within the [[Refinement and Fluency]] cluster and part of the [[Coordinative Learning]] cluster, thereby merging two themes that organized the first phase of the PSLC. Each of these clusters was concerned with identifying instructional events that produce robust learning. They differed mainly in that the relevant theme within the Coordinative Learning cluster had a specific focus on instructional events that included more than one input. (A second theme within the Coordinative Learning cluster was on instructional events that provoke learning events involving more than one reasoning method, and this theme will be continued in the [[Metacognition and Motivation]] thrust.) In the fifth year of the PSLC, we carry forward research from each of these clusters, while making a transition to an additional set of research questions. Although we frame this section in terms of the new Cognitive Factors thrust, the research carried out during the 5th year has been initiated in the current Refinement and Fluency and in part of the Coordinative Learning clusters. <br />
<br />
Our work on cognitive factors encompasses a triangulated set of events around learning: learning events, instructional events, and assessment events. Anything from a lesson to an entire curriculum can be considered a sequence of events whose durations vary from seconds to semesters. The hypotheses of the Cognitive Factors Thrust concern how instructional procedures (e.g., decisions about the learner’s task, materials, practice, feedback) affect learning events and thus the outcomes of learning. Learning involves the acquisition of [[knowledge components]], an increase in the [[feature validity]] and the [[strength]] of these components, and the integration of these components through practice. Our basic hypotheses include the following:<br />
<br />
* Explicitness: Instruction that draws the learner’s attention to valid features that support the relevant knowledge components leads to more robust learning than instruction that does not.<br />
* Assistance: The degree of assistance in the instruction affects learning in relation to student knowledge on specific knowledge components.<br />
* Practice: Practice schedules can be optimized using models of learning based on memory activation assumptions.<br />
* Integration: Knowledge components that are integrated during learning and practice lead to more robust learning and fluent performance across different tasks. <br />
<br />
The research plan tests these hypotheses across knowledge domains, as exemplified by the following projects:<br />
<br />
''Language background factors in L2 learning''. This work illustrates the synergies that develop in the PSLC’s LearnLab context, in this case between English as a second language (ESL) director Alan Juffs and other PSLC language researchers. In a prior cluster meeting, Juffs presented ESL classroom data that compared various L1 background students in their performance on transcribing their own speech, a standard piece of instruction in the ESL curriculum. The result that caught the interest of PSLC researchers (Dunlap, Guan, Perfetti) was the very poor spelling performance of Arabic-background students, relative to Spanish, Korean, and Chinese ESL students, despite comparable levels of spoken language performance. Furthermore, Juffs identified this discrepancy as a long-standing one in ESL instruction. Although one might hypothesize that a key factor is orthographic differences between L1 and L2, this seems unlikely here. Spanish to English is closer, but Chinese to English is farther in L1-L2 orthographic similarity. The first steps toward a new study have been taken with the help of a PSLC summer intern, who coded the errors made in spelling by all L1 background learners. The pattern of errors can be characterized as qualitatively similar, differing across languages quantitatively, suggesting a generalized English spelling problem. This analysis has led to the hypothesis that feature focusing—attention to full spelling patterns—is different across the L1 backgrounds, which we will test in a training experiment that focuses attention on spelling patterns.<br />
<br />
''Second language vocabulary learning''. Another new project originating within the Refinement and Fluency cluster will study English vocabulary learning using REAP. Based on recent research by Balass on the trade-offs between explicit (dictionary-based) and implicit (inferences from text) instruction in learning new words by monolingual subjects (Bolger et al, 2008), the new work will apply this tradeoff idea to second language learners. The hypothesis is that allowing learners to view definitions is more effective after they have read a sentence containing the word to be learned. This hypothesis reflects ideas about assistance (giving a definition versus inferring it) and the assumption that learning word meanings from context depends on the overlapping memory traces established by specific encounters with the word (Bolger et al, 2008). REAP allows us to use authentic texts for studies with students of various L1 backgrounds learning English through reading texts in their areas of interest. In our experiments, we will vary the availability of definitions provided on-line as part of the text reading. <br />
<br />
''Explicit instruction and practice schedules in algebra and second language learning''. Foreign language learning in classrooms has stimulated research on explicit vs. implicit instruction, with conclusions favoring the value of explicit instruction (Norris and Ortega, 2000). A major conclusion from PSLC work is that instruction that draws attention to critical valid features—“feature focusing”—is important in acquiring knowledge components for complex tasks. This conclusion has evidence from studies of L2 learning of English grammar by Levin, Frishkoff, and Pavlik; from studies of radical learning by Dunlap et al. and by Pavlik; and from studies by Zhang and MacWhinney and by Liu et al. on learning spoken syllables through pinyin (alphabetic spellings). Projects in French dictation (MacWhinney) and French grammar (Presson & MacWhinney), Chinese dictation (Zhang & MacWhinney), algebra (Pavlik), and arithmetical computation (Fiez) also reflect this theme. Much of this work has been combined with completely general hypotheses about practice, based on Pavlik and Anderson’s (2005) model that describes the trade-off between the benefit of spaced practice and the cost of the longer retention intervals brought by spacing. The resulting optimized practice schedule has been tested in several PSLC studies of vocabulary learning in Chinese (Pavlik, MacWhinney, Koedinger; reported in Pavlik, 2006) and of cues to French gender (Presson, MacWhinney, & Pavlik). Importantly, the optimization model is general: it applies to all domain content, and studies in both algebra and second language learning have been carried out. The new work in second language and in algebra builds on the synergies that have emerged from collaborations between domain researchers (e.g., MacWhinney) and Pavlik around experiments and models for optimizing practice. For Chinese, MacWhinney, Zhang, and Pavlik have developed a tutor for Chinese dictation and vocabulary learning that is being used in 18 sites. 
Data from these sites will be used to test the results of practice schedules and the form of instructional events (e.g. cues to gender in French) with longer term measures of robust learning. Because each of the tutors logs results to DataShop, the student records are a rich source of data for further study, including researchers beyond the PSLC. <br />
<br />
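The spacing trade-off described above can be illustrated with a simplified, fixed-decay version of an ACT-R-style activation model (the full Pavlik and Anderson model makes decay depend on activation at each practice; the parameter values below are hypothetical):<br />
<br />
```python
import math

def activation(practice_ages, decay=0.5):
    """Activation of an item as a power-law sum over the ages (in
    seconds) of its past practices; recent practice counts more.
    Simplified sketch: decay is fixed, unlike the full model."""
    return math.log(sum(age ** -decay for age in practice_ages))

def p_recall(m, tau=-0.7, s=0.25):
    """Logistic mapping from activation m to recall probability;
    tau (threshold) and s (noise) are hypothetical values."""
    return 1.0 / (1.0 + math.exp((tau - m) / s))
```
<br />
Higher activation yields higher predicted recall; optimizing a practice schedule then amounts to choosing practice times that keep predicted recall near a target level at minimal time cost.<br />
<br />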
''Learning the logic of unconfounded experiments.'' We will extend our research on college level science topics (chemistry and physics) to middle school science, with a focus on the cross-domain topic of experimental design. The ability to design unconfounded experiments and make valid inferences from their outcomes is an essential skill in scientific reasoning. The key idea here is CVS: the Control of Variables Strategy. CVS is the fundamental idea underlying the design of unconfounded experiments from which valid causal inferences can be made. Its acquisition is an important step in the development of scientific reasoning skills, because it provides a strong constraint on search in the space of experiments (Klahr, 2000). The Tutor for Experimental Design (TED), developed by Klahr’s research team, builds on previous work studying the different paths of learning and transfer that result from teaching CVS using different instructional methods that span from direct instruction to discovery (Chen & Klahr, 1999) and show differences along the “physical-virtual” dimension (Triona & Klahr, 2007). We build on this by constructing a semi-autonomous tutor, then developing a full computer-based tutor in Pittsburgh middle school LearnLabs and carrying out in vivo experiments with TED. <br />
<br />
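The core of CVS can be stated operationally: a comparison of two experimental setups is unconfounded with respect to a focal variable exactly when the setups differ on that variable and on nothing else. A minimal sketch, with hypothetical variable names:<br />
<br />
```python
def is_unconfounded(setup_a, setup_b, target_variable):
    """True iff the two setups (dicts mapping variable -> level, with
    the same keys) differ on the target variable and only on it."""
    differing = [v for v in setup_a if setup_a[v] != setup_b[v]]
    return differing == [target_variable]

# Hypothetical ramp experiment: does surface texture affect rolling distance?
a = {"surface": "smooth", "steepness": "high", "ball": "golf"}
b = {"surface": "rough",  "steepness": "high", "ball": "golf"}   # unconfounded
c = {"surface": "rough",  "steepness": "low",  "ball": "golf"}   # confounded
```
<br />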
''Integration of knowledge components.'' Isolated knowledge components are not sufficient to produce fluent use of knowledge. Integrating knowledge components is important not only in authentic practice that follows acquisition of knowledge components but also, we hypothesize, in the initial acquisition of those components. Some of our prior work in coordinative learning establishes some of the conditions that favor multiple inputs during learning (e.g., Davenport et al in stoichiometry). And experiments on fluency support the value of repeated practice in single-topic speaking as a way to support fluency (de Jong, Halderman and Perfetti). In new work we propose to build on progress we have made in the study of fluency in language (de Jong et al) and arithmetic (Fiez). For example, we will follow up the discovery by de Jong and colleagues that when L2 speakers repeat a speech on a single topic, their fluency scores increase on a number of measures. We will test the hypothesis that this results from the advantage of retrieving the same conceptual and lexical knowledge and overall speech plan on successive attempts, allowing fluency to increase on procedural components supported by chunking of words to phrases. We are accumulating a large database in the English LearnLab that will support the testing of additional hypotheses. The idea that some relatively simple learning (e.g., 3-5 knowledge components) is supported by integration from the beginning is being tested by Liu, Guan & Perfetti in a study of learning to read Chinese characters. The hypothesis is that when students write unfamiliar characters within the same 60-second time period in which they read the character and try to learn its meaning and pronunciation, they will show more robust learning as measured by reading tasks. Underlying this hypothesis is the idea that the representation of a character (or other objects that follow structural principles) can be perceptual-motor as well as visual.<br />
<br />
== Descendents ==<br />
<br />
*[[Klahr - TED]]<br />
*[[Perfetti - Read Write Integration]]<br />
*[[MacWhinney - Second Language Grammar]]<br />
*[[Juffs - Feature Focus in Word Learning]]<br />
*[[de Jong - Fluency]]<br />
*[[McLaren_-_The_Assistance_Dilemma_And_Discovery_Learning | McLaren - The Assistance Dilemma and Discovery Learning]]<br />
*[[Wylie - Intelligent Writing Tutor]]<br />
*[[Eskenazi - REAP]]<br />
<br />
=== References ===<br />
* Borek, A., McLaren, B.M., Karabinos, M., & Yaron, D. (2009). How Much Assistance is Helpful to Students in Discovery Learning? In U. Cress, V. Dimitrova, & M. Specht (Eds.), Proceedings of the Fourth European Conference on Technology Enhanced Learning, Learning in the Synergy of Multiple Disciplines (EC-TEL 2009), LNCS 5794, September/October 2009, Nice, France. (pp. 391-404). Springer-Verlag Berlin Heidelberg.</div>Bmclarenhttps://learnlab.org/wiki/index.php?title=McLaren_Assistance_Dilemma_And_Discovery_Learning&diff=10074McLaren Assistance Dilemma And Discovery Learning2009-11-20T22:56:55Z<p>Bmclaren: New page: ==How Much Assistance is Needed for Discovery Learning?== Bruce M. McLaren ===Overview=== PIs: Richard Mayer, University of California, Santa Barbara, Bruce M. McLaren, Carnegie Mellon ...</p>
<hr />
<div>==How Much Assistance is Needed for Discovery Learning?==<br />
<br />
Bruce M. McLaren<br />
<br />
===Overview===<br />
<br />
PIs: Richard Mayer, University of California, Santa Barbara; Bruce M. McLaren, Carnegie Mellon University, Pittsburgh<br />
<br />
Others who have contributed 160 hours or more:<br />
<br />
* John laPlante, Carnegie Mellon University, programming and website design and deployment<br />
* Krista DeLeeuw, University of California, Santa Barbara, statistical analysis<br />
* Brett Leber, Carnegie Mellon University, programming and website design<br />
* Shawn Snyder, Carnegie Mellon University, programming<br />
* Seiji Isotani, Carnegie Mellon University, research and programming<br />
<br />
We have revised the lessons and tests of [[McLaren_et_al_-_Studying_the_Learning_Effect_of_Personalization_and_Worked_Examples_in_the_Solving_of_Stoich_Problems | McLaren et al's stoichiometry tutor]] for the purpose of this new study. We have also created “voice” versions of each lesson in which the tutor speaks using a friendly human voice, providing the student with hints and error feedback. <br />
<br />
In early 2009, we ran both a lab study at the University of California, with over 100 subjects, and a classroom study in 4 high schools in the eastern United States, also with over 100 subjects. We are currently analyzing the results -- preliminary results are discussed below in the "Findings" section -- and we will write a paper based on the studies for submission to the Journal of Educational Psychology in the summer of 2009.<br />
<br />
===Abstract===<br />
<br />
The goal of this project is to examine how student learning is affected by social cues in computer-based learning environments, such as the conversational style of online cognitive tutors. In particular, students will learn how to solve stoichiometry problems in the Chemistry LearnLab, using a cognitive tutor that provides hints and feedback in direct style or in polite style (McLaren, Lim, Yaron, & Koedinger, 2007). The stoichiometry tutor has been used for other PSLC studies, in particular those by [[McLaren_et_al_-_Studying_the_Learning_Effect_of_Personalization_and_Worked_Examples_in_the_Solving_of_Stoich_Problems | McLaren et al]] that have investigated personalization, politeness, and worked examples.<br />
<br />
Our study is based on Brown and Levinson’s (1987) theory of politeness, which specifies how people create polite requests; Reeves and Nass’ (1996, 2005) media equation theory, which specifies the conditions under which people accept computers as conversational partners; and Mayer’s (2005) personalization principle in which people work harder to learn when they feel they are in a conversation with a tutor. Our working hypothesis is that learners work harder to make sense of lessons when they work with polite rather than direct tutors, because learners are more likely to accept polite tutors as conversational partners (Mayer, 2005; Wang, Johnson, Mayer, Rizzo, Shaw, & Collins, 2008).<br />
<br />
===Glossary===<br />
<br />
*[[E-Learning Principles]] <br />
*[[Personalization]]<br />
*[[Politeness Principle]]<br />
*[[Modality Principle]]<br />
<br />
===Research Questions===<br />
<br />
Do polite feedback and hints within a computer tutor lead to more robust learning than direct feedback and hints? <br />
<br />
Do polite audio feedback and hints within a computer tutor lead to more robust learning than text-only feedback and hints (whether polite or direct)?<br />
<br />
===Hypothesis===<br />
<br />
We have two hypotheses, based on these research questions, with the second built on the first:<br />
<br />
;H1<br />
:Students will experience more robust learning when they work with polite rather than direct tutors, because learners are more likely to accept polite tutors as conversational partners<br />
<br />
;H2<br />
:Students will experience more robust learning when they work with polite tutors that provide audio feedback and hints rather than polite or direct tutors that provide no audio feedback, because learners are more likely to accept audio polite tutors as conversational partners<br />
<br />
===Background and Significance===<br />
<br />
The polite tutor uses politeness strategies developed by Brown and Levinson (1987), in which the goal is to save positive face--allowing the learner to feel appreciated and respected by the conversational partner--and to save negative face--allowing the learner to feel that his or her freedom of action is unimpeded by the other party in the conversation. After interacting with the stoichiometry tutor to solve a series of problems over several hours, learners will be given a transfer test based on the underlying principles--including an immediate test and a delayed test. We expect learners who had the polite tutor to perform substantially better on the transfer test than learners who had the direct tutor.<br />
<br />
We will also experiment with Clark & Mayer's Modality Principle, in which audio narration replaces onscreen text.<br />
<br />
===Independent Variables===<br />
<br />
The independent variables we will experiment with in our studies are politeness (either direct or polite) and audio (hints & feedback in audio or text). <br />
<br />
These variables will be crossed, leading to a 2x2 factorial design with the following conditions.<br />
<br />
* ''Condition 1: Polite-Audio'': Students work with the stoichiometry tutor that provides polite statements that are spoken<br />
<br />
[[Image:Cond1-PoliteAudio.jpg|600px|center]]<br />
<br />
* ''Condition 2: Polite-Text'': Students work with the stoichiometry tutor that provides polite statements that are in text only<br />
<br />
[[Image:Cond2-PoliteText.jpg|600px|center]]<br />
<br />
* ''Condition 3: Direct-Audio'': Students work with the stoichiometry tutor that provides direct statements that are spoken<br />
<br />
[[Image:Cond3-DirectAudio.jpg|600px|center]]<br />
<br />
* ''Condition 4: Direct-Text'': Students work with the stoichiometry tutor that provides direct statements that are in text only<br />
<br />
[[Image:Cond4-DirectText.jpg|600px|center]]<br />
<br />
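Crossing the two factors yields the four cells above; a balanced random assignment of subjects to the crossed conditions might be sketched as follows (the subject IDs and seed are hypothetical, not the study's actual procedure):<br />
<br />
```python
import itertools
import random

POLITENESS = ["polite", "direct"]
MODALITY = ["audio", "text"]
CONDITIONS = list(itertools.product(POLITENESS, MODALITY))  # the 4 cells

def assign(subject_ids, seed=0):
    """Shuffle subjects, then deal them round-robin into the four
    cells so that cell sizes differ by at most one."""
    rng = random.Random(seed)
    ids = list(subject_ids)
    rng.shuffle(ids)
    return {sid: CONDITIONS[i % len(CONDITIONS)] for i, sid in enumerate(ids)}
```
<br />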
===Dependent Variables===<br />
<br />
Our plan is to include the following robust learning dependent variables in our studies.<br />
<br />
* ''[[Normal post-test]]'': Students will take an immediate post-test, right after completing work with the stoichiometry tutor<br />
* ''[[Transfer]]'': Conceptual, transfer questions will be included in the post-tests<br />
* ''[[Long-term retention]]'': Students will take a second post-test, including conceptual, transfer questions, 7 days after the initial post-test<br />
<br />
===Findings===<br />
<br />
As mentioned above, a lab study with over 100 subjects was run in early 2009 at the University of California with the above conditions. College students learned to solve chemistry stoichiometry problems with the stoichiometry tutor through hints and feedback, either polite or direct, as described above. There was a pattern in which students with low prior knowledge of chemistry performed better on subsequent problem-solving tests if they learned from the polite tutor rather than the direct tutor (d = .73 on an immediate test, d = .46 on a delayed test), whereas students with high prior knowledge showed the reverse trend (d = -.49 for an immediate test; d = -.13 for a delayed test). On the other hand, the high school study, also run in early 2009 with over 100 subjects, produced different results. In particular, the high school students did not show a pattern in which students with low prior knowledge of chemistry performed better on subsequent tests. We are still analyzing the audio feature of the study, i.e., the comparison of audio to text hints and messages, but preliminary results indicate that adding audio hurt the performance of high knowledge learners and helped low knowledge learners on the delayed test.<br />
<br />
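The effect sizes reported above are Cohen's d values, a standardized mean difference computed with a pooled standard deviation; a sketch with hypothetical score lists:<br />
<br />
```python
import math

def cohens_d(group1, group2):
    """Cohen's d: difference of group means divided by the pooled
    sample standard deviation of the two groups."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```
<br />
A positive d here favors the first group (e.g., the polite condition); by a common rule of thumb, d near .5 is a medium effect and d near .8 a large one.<br />
<br />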
===Explanation===<br />
<br />
This study is part of the [[Computational Modeling and Data Mining]] thrust.<br />
<br />
Our explanation for the specific findings from our experiment is forthcoming. We are currently preparing a paper for the Journal of Educational Psychology that will provide such an explanation.<br />
<br />
=== Connections to Other PSLC Studies===<br />
<br />
* This study has a clear connection to the [[McLaren_et_al_-_Studying_the_Learning_Effect_of_Personalization_and_Worked_Examples_in_the_Solving_of_Stoich_Problems | McLaren et al study]], in that both studies explore the effect of personalized, polite hints and feedback. In fact, it was through McLaren's original studies, built on earlier work on e-Learning principles by Mayer, that Mayer and McLaren decided to join forces.<br />
<br />
===Annotated Bibliography===<br />
<br />
*McLaren, B.M., DeLeeuw, K.E., & Mayer, R.E. (submitted). A Politeness Effect in Learning with Web-Based Intelligent Tutors. Submitted to the Journal of Human-Computer Studies.<br />
<br />
===References===<br />
<br />
*Brown, P., & Levinson, S. C. (1987). Politeness: Some universals in language usage. New York: Cambridge University Press.<br />
*Mayer, R. E. (2005). Principles of multimedia learning based on social cues: Personalization, voice, and image principles. In R. E. Mayer (Ed.), The Cambridge handbook of multimedia learning (pp. 201-212). New York: Cambridge University Press.<br />
*McLaren, B. M., Lim, S., Yaron, D., and Koedinger, K. R. (2007). Can a Polite Intelligent Tutoring System Lead to Improved Learning Outside of the Lab? In the Proceedings of the 13th International Conference on Artificial Intelligence in Education (AIED-07), pp. 331-338. [http://www.learnlab.org/research/wiki/images/5/5a/AIED-07-PoliteTutoring.pdf pdf file]<br />
*Nass, C., & Brave, S. (2005). Wired for speech: How voice activates and advances the human-computer relationship. Cambridge, MA: MIT Press.<br />
*Reeves, B., and Nass, C. (1996). The media equation. New York: Cambridge University Press.<br />
*Wang, N., Johnson, W. L., Mayer, R. E., Rizzo, P., Shaw, E., & Collins, H. (2008). The politeness effect: Pedagogical agents and learning outcomes. International Journal of Human-Computer Studies, 66, 98-112.</div>Bmclarenhttps://learnlab.org/wiki/index.php?title=Cognitive_Factors&diff=10073Cognitive Factors2009-11-20T22:54:50Z<p>Bmclaren: /* Descendents */</p>
<hr />
<div>The research in this thrust is aimed at understanding cognitive learning—changes in knowledge—that result from [[instructional events]]. It builds on work in the learning sciences field at large and on research carried out in the PSLC over its first four years within the [[Refinement and Fluency]] cluster and part of the [[Coordinative Learning]] cluster, thereby merging two themes that organized the first phase of the PSLC. Each of these clusters was concerned with identifying instructional events that produce robust learning. They differed mainly in that the relevant theme within the Coordinative Learning cluster had a specific focus on instructional events that included more than one input. (A second theme within the Coordinative Learning cluster was on instructional events that provoke learning events involving more than one reasoning method, and this theme will be continued in the [[Metacognition and Motivation]] thrust.) In the fifth year of the PSLC, we carry forward research from each of these clusters, while making a transition to an additional set of research questions. Although we frame this section in terms of the new Cognitive Factors thrust, the research carried out during the 5th year has been initiated in the current Refinement and Fluency and in part of the Coordinative Learning clusters. <br />
<br />
Our work on cognitive factors encompasses a triangulated set of events around learning: learning events, instructional events, and assessment events. Anything from a lesson to an entire curriculum can be considered a sequence of events whose durations vary from seconds to semesters. The hypotheses of the Cognitive Factors Thrust concern how instructional procedures (e.g., decisions about the learner’s task, materials, practice, feedback) affect learning events and thus the outcomes of learning. Learning involves the acquisition of [[knowledge components]], an increase in the [[feature validity]] and the [[strength]] of these components, and the integration of these components through practice. Our basic hypotheses include the following:<br />
<br />
* Explicitness: Instruction that draws the learner’s attention to valid features that support the relevant knowledge components leads to more robust learning than instruction that does not.<br />
* Assistance: The degree of assistance in the instruction affects learning in relation to student knowledge on specific knowledge components.<br />
* Practice: Practice schedules can be optimized using models of learning based on memory activation assumptions.<br />
* Integration: Knowledge components that are integrated during learning and practice lead to more robust learning and fluent performance across different tasks. <br />
<br />
The research plan tests these hypotheses across knowledge domains, as exemplified by the following projects:<br />
<br />
''Language background factors in L2 learning''. This work illustrates the synergies that develop in the PSLC’s LearnLab context, in this case between English as a second language (ESL) director Alan Juffs and other PSLC language researchers. In a prior cluster meeting, Juffs presented ESL classroom data that compared various L1 background students in their performance on transcribing their own speech, a standard piece of instruction in the ESL curriculum. The result that caught the interest of PSLC researchers (Dunlap, Guan, Perfetti) was the very poor spelling performance of Arabic-background students, relative to Spanish, Korean, and Chinese ESL students, despite comparable levels of spoken language performance. Furthermore, Juffs identified this discrepancy as a long-standing one in ESL instruction. Although one might hypothesize that a key factor is orthographic differences between L1 and L2, this seems unlikely here. Spanish to English is closer, but Chinese to English is farther in L1-L2 orthographic similarity. The first steps toward a new study have been taken with the help of a PSLC summer intern, who coded the errors made in spelling by all L1 background learners. The pattern of errors can be characterized as qualitatively similar, differing across languages quantitatively, suggesting a generalized English spelling problem. This analysis has led to the hypothesis that feature focusing—attention to full spelling patterns—is different across the L1 backgrounds, which we will test in a training experiment that focuses attention on spelling patterns.<br />
<br />
''Second language vocabulary learning''. Another new project originating within the Refinement and Fluency cluster will study English vocabulary learning using REAP. Based on recent research by Balass on the trade-offs between explicit (dictionary-based) and implicit (inferences from text) instruction in learning new words by monolingual subjects (Bolger et al, 2008), the new work will apply this tradeoff idea to second language learners. The hypothesis is that allowing learners to view definitions is more effective after they have read a sentence containing the word to be learned. This hypothesis reflects ideas about assistance (giving a definition versus inferring it) and the assumption that learning word meanings from context depends on the overlapping memory traces established by specific encounters with the word (Bolger et al, 2008). REAP allows us to use authentic texts for studies with students of various L1 backgrounds learning English through reading texts in their areas of interest. In our experiments, we will vary the availability of definitions provided on-line as part of the text reading. <br />
<br />
''Explicit instruction and practice schedules in algebra and second language learning''. Foreign language learning in classrooms has stimulated research on explicit vs. implicit instruction, with conclusions favoring the value of explicit instruction (Norris and Ortega, 2000). A major conclusion from PSLC work is that instruction that draws attention to critical valid features—“feature focusing”—is important in acquiring knowledge components for complex tasks. This conclusion has evidence from studies of L2 learning of English grammar by Levin, Frishkoff, and Pavlik; from studies of radical learning by Dunlap et al. and by Pavlik; and from studies by Zhang and MacWhinney and by Liu et al. on learning spoken syllables through pinyin (alphabetic spellings). Projects in French dictation (MacWhinney) and French grammar (Presson & MacWhinney), Chinese dictation (Zhang & MacWhinney), algebra (Pavlik), and arithmetical computation (Fiez) also reflect this theme. Much of this work has been combined with completely general hypotheses about practice, based on Pavlik and Anderson’s (2005) model that describes the trade-off between the benefit of spaced practice and the cost of the longer retention intervals brought by spacing. The resulting optimized practice schedule has been tested in several PSLC studies of vocabulary learning in Chinese (Pavlik, MacWhinney, Koedinger; reported in Pavlik, 2006) and of cues to French gender (Presson, MacWhinney, & Pavlik). Importantly, the optimization model is general: it applies to all domain content, and studies in both algebra and second language learning have been carried out. The new work in second language and in algebra builds on the synergies that have emerged from collaborations between domain researchers (e.g., MacWhinney) and Pavlik around experiments and models for optimizing practice. For Chinese, MacWhinney, Zhang, and Pavlik have developed a tutor for Chinese dictation and vocabulary learning that is being used in 18 sites. 
Data from these sites will be used to test the results of practice schedules and the form of instructional events (e.g. cues to gender in French) with longer term measures of robust learning. Because each of the tutors logs results to DataShop, the student records are a rich source of data for further study, including researchers beyond the PSLC. <br />
<br />
''Learning the logic of unconfounded experiments.'' We will extend our research on college level science topics (chemistry and physics) to middle school science, with a focus on the cross-domain topic of experimental design. The ability to design unconfounded experiments and make valid inferences from their outcomes is an essential skill in scientific reasoning. The key idea here is CVS: the Control of Variables Strategy. CVS is the fundamental idea underlying the design of unconfounded experiments from which valid, causal, inferences can be made. Its acquisition is an important step in the development of scientific reasoning skills , because it provides a strong constraint on search in the space of experiments (Klahr, 2000). The Tutor for Experimental Design (TED), developed by Klahr’s research team, builds on previous work studying the different paths of learning and transfer that result from teaching CVS using different instructional methods that span from direct instruction to discovery (Chen & Klahr, 1999) and show differences along the “physical-virtual” dimension (Triona & Klahr, 2007). We build on this by constructing of a semi-autonomous tutor, then developing a full computer based tutor in Pittsburgh middle school LearnLabs and carrying out in vivo experiments with TED. <br />
<br />
''Integration of knowledge components.'' Isolated knowledge components are not sufficient to produce fluent use of knowledge. Integrating knowledge components is important both in authentic practice that follows acquisition of knowledge components but, we hypothesize, also in the initial acquisition of components. Some of our prior work in coordinative learning establishes some of the conditions that favor multiple inputs during learning (e.g., Davenport et al in stochiometry). And experiments on fluency support the value of repeated practice in single-topic speaking as way to support fluency (de Jong, Halderman and Perfetti). In new work we propose to build on progress we have made in the study of fluency in language (de Jong et al) and arithmetic (Fiez). For example, we will follow the discovery by de Jong and colleagues that when L2 speakers repeat a speech on a single topic, their fluency scores increase on a number of measures. We will test the hypothesis that this results from the advantage of retrieving the same conceptual and lexical knowledge and overall speech plan on successive attempts, allowing fluency to increase on procedural components supported by chunking of words to phrases. We are accumulating a large database in the English LearnLab that will support the testing of additional hypotheses. The idea that some relatively simple learning (e.g. 3-5 knowledge components) is supported by integration from the beginning is being tested by Liu, Guan & Perfetti in a study of learning to read Chinese characters. The hypothesis is that when students write unfamiliar characters within the same 60-second time period that they read the character and try to learn its meaning and pronunciation, they will show more robust learning measured by reading tasks. Underlying this hypothesis is the idea that the representation of a character (or other objects that follow structural principles) can be perceptual-motor as well as visual.<br />
<br />
== Descendents ==<br />
<br />
*[[Klahr - TED]]<br />
*[[Perfetti - Read Write Integration]]<br />
*[[MacWhinney - Second Language Grammar]]<br />
*[[Juffs - Feature Focus in Word Learning]]<br />
*[[de Jong - Fluency]]<br />
*[[McLaren_Assistance_Dilemma_And_Discovery_Learning | McLaren - The Assistance Dilemma and Discovery Learning]]<br />
*[[Wylie - Intelligent Writing Tutor]]<br />
*[[Eskenazi - REAP]]<br />
<br />
=== References ===<br />
* Borek, A., McLaren, B.M., Karabinos, M., & Yaron, D. (2009). How Much Assistance is Helpful to Students in Discovery Learning? In U. Cress, V. Dimitrova, & M. Specht (Eds.), Proceedings of the Fourth European Conference on Technology Enhanced Learning, Learning in the Synergy of Multiple Disciplines (EC-TEL 2009), LNCS 5794, September/October 2009, Nice, France. (pp. 391-404). Springer-Verlag Berlin Heidelberg.</div>Bmclarenhttps://learnlab.org/wiki/index.php?title=Cognitive_Factors&diff=10072Cognitive Factors2009-11-20T22:54:09Z<p>Bmclaren: /* Descendents */</p>
<hr />
<div>The research in this thrust is aimed at understanding cognitive learning—changes in knowledge—that result from [[instructional events]]. It builds on work in the learning sciences field at large and on research carried out in the PSLC over its first four years within the [[Refinement and Fluency]] cluster and part of the [[Coordinative Learning]] cluster, thereby merging two themes that organized the first phase of the PSLC. Each of these clusters was concerned with identifying instructional events that produce robust learning. They differed mainly in that the relevant theme within the Coordinative Learning cluster had a specific focus on instructional events that included more than one input. (A second theme within the Coordinative Learning cluster concerned instructional events that provoke learning events involving more than one reasoning method, and that theme will be continued in the [[Metacognition and Motivation]] thrust.) In the fifth year of the PSLC, we carry forward research from each of these clusters while making a transition to an additional set of research questions. Although we frame this section in terms of the new Cognitive Factors thrust, the research carried out during the fifth year was initiated in the current Refinement and Fluency cluster and in part of the Coordinative Learning cluster. <br />
<br />
Our work on cognitive factors encompasses a triangulated set of events around learning: learning events, instructional events, and assessment events. Anything from a lesson to an entire curriculum can be considered a sequence of events whose durations vary from seconds to semesters. The hypotheses of the Cognitive Factors Thrust concern how instructional procedures (e.g., decisions about the learner’s task, materials, practice, feedback) affect learning events and thus the outcomes of learning. Learning involves the acquisition of [[knowledge components]], an increase in the [[feature validity]] and the [[strength]] of these components, and the integration of these components through practice. Our basic hypotheses include the following:<br />
<br />
* Explicitness: Instruction that draws the learner’s attention to valid features that support the relevant knowledge components leads to more robust learning than instruction that does not.<br />
* Assistance: The optimal degree of assistance in instruction depends on the student's current knowledge of the specific knowledge components involved.<br />
* Practice: Practice schedules can be optimized using models of learning based on memory activation assumptions.<br />
* Integration: Knowledge components that are integrated during learning and practice lead to more robust learning and fluent performance across different tasks. <br />
<br />
The research plan tests these hypotheses across knowledge domains, as exemplified by the following projects:<br />
<br />
''Language background factors in L2 learning''. This work illustrates the synergies that develop in the PSLC’s LearnLab context, in this case between English as a second language (ESL) director Alan Juffs and other PSLC language researchers. In a prior cluster meeting, Juffs presented ESL classroom data comparing students from various L1 backgrounds on their performance in transcribing their own speech, a standard piece of instruction in the ESL curriculum. The result that caught the interest of PSLC researchers (Dunlap, Guan, Perfetti) was the very poor spelling performance of Arabic-background students relative to Spanish-, Korean-, and Chinese-background ESL students, despite comparable levels of spoken language performance. Furthermore, Juffs identified this discrepancy as a long-standing one in ESL instruction. Although one might hypothesize that the key factor is orthographic difference between L1 and L2, this seems unlikely here: Spanish is closer to English in orthographic similarity and Chinese is farther, yet neither group showed the deficit of the Arabic-background students. The first steps toward a new study have been taken with the help of a PSLC summer intern, who coded the spelling errors made by learners of all L1 backgrounds. The pattern of errors is qualitatively similar across languages, differing only quantitatively, suggesting a generalized English spelling problem. This analysis has led to the hypothesis that feature focusing—attention to full spelling patterns—differs across L1 backgrounds, which we will test in a training experiment that focuses attention on spelling patterns.<br />
<br />
''Second language vocabulary learning''. Another new project originating within the Refinement and Fluency cluster will study English vocabulary learning using REAP. Based on recent research by Balass on the trade-offs between explicit (dictionary-based) and implicit (inference from text) instruction in learning new words by monolingual subjects (Bolger et al., 2008), the new work will apply this trade-off idea to second language learners. The hypothesis is that allowing learners to view definitions is more effective after they have read a sentence containing the word to be learned. This hypothesis reflects ideas about assistance (giving a definition versus inferring it) and the assumption that learning word meanings from context depends on the overlapping memory traces established by specific encounters with the word (Bolger et al., 2008). REAP allows us to use authentic texts for studies with students of various L1 backgrounds learning English through reading texts in their areas of interest. In our experiments, we will vary the availability of definitions provided online as part of the text reading. <br />
<br />
''Explicit instruction and practice schedules in algebra and second language learning''. Foreign language learning in classrooms has stimulated research on explicit vs. implicit instruction, with conclusions favoring the value of explicit instruction (Norris and Ortega, 2000). A major conclusion from PSLC work is that instruction that draws attention to critical valid features—“feature focusing”—is important in acquiring knowledge components for complex tasks. This conclusion has evidence from studies of L2 learning of English grammar by Levin, Frishkoff, and Pavlik, studies of radical learning by Dunlap et al. and by Pavlik, and studies by Zhang and MacWhinney and by Liu et al. on learning spoken syllables through pinyin (alphabetic spellings). Projects in French dictation (MacWhinney), French grammar (Presson & MacWhinney), Chinese dictation (Zhang & MacWhinney), algebra (Pavlik), and arithmetical computation (Fiez) also reflect this theme. Much of this work has been combined with completely general hypotheses about practice, based on Pavlik and Anderson's (2005) model, which describes the trade-off between the benefit of spaced practice and the cost of the longer retention intervals brought by spacing. The resulting optimized practice schedule has been tested in several PSLC studies of vocabulary learning in Chinese (Pavlik, MacWhinney, Koedinger; reported in Pavlik, 2006) and of cues to French gender (Presson, MacWhinney, & Pavlik). The generality of the optimization model is important: it applies to all domain content, and studies in both algebra and second language learning have been carried out. The new work in second language and in algebra builds on the synergies that have emerged from collaborations between domain researchers (e.g., MacWhinney) and Pavlik around experiments and models for optimizing practice. For Chinese, MacWhinney, Zhang, and Pavlik have developed a tutor for Chinese dictation and vocabulary learning that is being used in 18 sites. Data from these sites will be used to test the effects of practice schedules and of the form of instructional events (e.g., cues to gender in French) with longer-term measures of robust learning. Because each of the tutors logs results to DataShop, the student records are a rich source of data for further study, including by researchers beyond the PSLC. <br />
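The spacing trade-off in Pavlik and Anderson's (2005) model can be sketched briefly: each practice lays down a memory trace whose power-law decay rate depends on the item's activation at the moment of practice, so closely massed practice produces fast-decaying traces while spaced practice produces more durable ones. The sketch below is illustrative only; the functional form follows the paper's ACT-R-style account, but the function names and parameter values are our own choices, not the fitted model.<br />

```python
import math

def trace_decay(m, c=0.2, a=0.18):
    """Decay rate of a new trace grows with activation m at practice time,
    so practicing an already-active item yields a fast-decaying trace.
    This is the source of the spacing benefit in the model."""
    return c * math.exp(m) + a

def activation(now, times, decays):
    """ACT-R-style base-level activation: log of summed power-law
    decayed traces from all prior practices."""
    return math.log(sum((now - t) ** -d for t, d in zip(times, decays)))

def practice(schedule, c=0.2, a=0.18):
    """Run a schedule of practice times; return each trace's time and decay."""
    times, decays = [], []
    for now in schedule:
        m = activation(now, times, decays) if times else float("-inf")
        decays.append(trace_decay(m, c, a))  # exp(-inf) == 0, so first decay == a
        times.append(now)
    return times, decays

# Same number of practices, tested at time 100: the spaced schedule
# leaves higher activation at the long retention interval.
massed = activation(100, *practice([0, 1, 2]))
spaced = activation(100, *practice([0, 30, 60]))
```

An optimized schedule of the kind tested in the Chinese vocabulary studies would search over candidate practice times to maximize activation (hence recall probability) at a target retention interval, subject to a time budget.<br />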
<br />
''Learning the logic of unconfounded experiments.'' We will extend our research on college-level science topics (chemistry and physics) to middle school science, with a focus on the cross-domain topic of experimental design. The ability to design unconfounded experiments and make valid inferences from their outcomes is an essential skill in scientific reasoning. The key idea here is CVS: the Control of Variables Strategy. CVS is the fundamental idea underlying the design of unconfounded experiments from which valid causal inferences can be made. Its acquisition is an important step in the development of scientific reasoning skills, because it provides a strong constraint on search in the space of experiments (Klahr, 2000). The Tutor for Experimental Design (TED), developed by Klahr’s research team, builds on previous work studying the different paths of learning and transfer that result from teaching CVS using instructional methods that span from direct instruction to discovery (Chen & Klahr, 1999) and differ along the “physical-virtual” dimension (Triona & Klahr, 2007). We build on this by first constructing a semi-autonomous tutor, then developing a full computer-based tutor in Pittsburgh middle school LearnLabs and carrying out in vivo experiments with TED. <br />
<br />
''Integration of knowledge components.'' Isolated knowledge components are not sufficient to produce fluent use of knowledge. Integrating knowledge components is important not only in authentic practice that follows acquisition of knowledge components but also, we hypothesize, in the initial acquisition of those components. Some of our prior work in coordinative learning establishes some of the conditions that favor multiple inputs during learning (e.g., Davenport et al. in stoichiometry). And experiments on fluency support the value of repeated practice in single-topic speaking as a way to support fluency (de Jong, Halderman, and Perfetti). In new work we propose to build on progress we have made in the study of fluency in language (de Jong et al.) and arithmetic (Fiez). For example, we will follow up the discovery by de Jong and colleagues that when L2 speakers repeat a speech on a single topic, their fluency scores increase on a number of measures. We will test the hypothesis that this results from the advantage of retrieving the same conceptual and lexical knowledge and overall speech plan on successive attempts, allowing fluency to increase on procedural components supported by the chunking of words into phrases. We are accumulating a large database in the English LearnLab that will support the testing of additional hypotheses. The idea that some relatively simple learning (e.g., 3-5 knowledge components) is supported by integration from the beginning is being tested by Liu, Guan, and Perfetti in a study of learning to read Chinese characters. The hypothesis is that when students write unfamiliar characters within the same 60-second period in which they read the character and try to learn its meaning and pronunciation, they will show more robust learning as measured by reading tasks. Underlying this hypothesis is the idea that the representation of a character (or of other objects that follow structural principles) can be perceptual-motor as well as visual.<br />
<br />
== Descendents ==<br />
<br />
*[[Klahr - TED]]<br />
*[[Perfetti - Read Write Integration]]<br />
*[[MacWhinney - Second Language Grammar]]<br />
*[[Juffs - Feature Focus in Word Learning]]<br />
*[[de Jong - Fluency]]<br />
*[[McLaren_AssistanceDilemmaDiscoveryLearning | McLaren - The Assistance Dilemma and Discovery Learning]]<br />
*[[Wylie - Intelligent Writing Tutor]]<br />
*[[Eskenazi - REAP]]<br />
<br />
=== References ===<br />
* Borek, A., McLaren, B.M., Karabinos, M., & Yaron, D. (2009). How Much Assistance is Helpful to Students in Discovery Learning? In U. Cress, V. Dimitrova, & M. Specht (Eds.), Proceedings of the Fourth European Conference on Technology Enhanced Learning, Learning in the Synergy of Multiple Disciplines (EC-TEL 2009), LNCS 5794, September/October 2009, Nice, France. (pp. 391-404). Springer-Verlag Berlin Heidelberg.</div>Bmclaren