=== Background & Significance ===
==== Research Objectives / Expected Benefits ====

The PSLC’s in vivo methodology is focused to a large extent on establishing the validity and applicability conditions of instructional principles. Instructional design principles have emerged as a major component of the PSLC theoretical framework, with their own pages on the theory wiki. Currently, many PSLC studies focus on one, or at most two, principles at a time. This “vary-one-principle-at-a-time” approach is one of the pillars of the PSLC’s research methodology, and in many cases increases methodological rigor. It is important, however, that we also study the effect of combinations of principles on robust learning, for both practical and theoretical reasons.

From the practical viewpoint of improving education, when carefully designing instruction according to established instructional principles, it is quite natural to consider applying multiple principles. Further, in order to design highly effective instruction, it may well be necessary, or at a minimum highly desirable, to combine principles, as most sets of educational design principles in the literature explicitly propose (Anderson et al., 1995; Quintana et al., 2004).

For example, one might use worked examples in the context of instruction designed to support visual-verbal integration, thus combining the Worked Examples Principle and the Visual-Verbal Integration Principle. In the same instructional intervention, one might prompt students to self-explain instructional materials or reasoning steps, thus adding a third principle, the Self-Explanation Principle. One could expect the combination to be more effective than any of the three principles by itself (or even than any pair of them). Similarly, combining these interventions with refined knowledge component models (Accurate knowledge decomposition principle) and more accurate knowledge estimates (Accurate knowledge estimates principle) may lead to even better learning still.

Combining principles creates the possibility of developing an intervention whose effect on learning is greater than any individual principle could produce. Individual principles may not lead to very large effect sizes by themselves, but even when they do, their combination may lead to still larger effect sizes, if the principles complement each other. But is it reasonable to assume that principles will be synergistic? Will one indeed achieve dramatic improvements in instructional effectiveness by combining multiple principles, or will there be diminishing returns? When designing instruction, it is important to know which principles are worth combining, and how effective their combination is. A practical science of learning should provide guidance on these types of questions.

Toward understanding this, in this project we will take a set of instructional principles which appear to be complementary, inasmuch as they benefit robust learning in different ways, and combine the interventions based on these principles to create a “Greatest Hits” intervention. Studying these interventions in concert will give us the ability to make inferences as to whether the principles are complementary, as they appear to be. It will also enable us to see whether, in a situation explicitly designed to combine complementary interventions, the interventions are additive (the total effect size equals the individual effect sizes added together), synergistic (the total effect size is greater than the individual effect sizes added together), sub-additive (the total effect size exceeds that of the best individual intervention, but falls short of the individual effect sizes added together), or no-gain (the total effect size is no better than that of the best individual intervention).

Additionally, by testing a combination of promising principles, we enhance our understanding of the underlying reason why each principle is associated with learning gains. For example, if the existing theoretical rationale for two principles leads to the conclusion that they should be synergistic, evidence of redundancy will guide theory refinement. Conversely, when principles predicted to be redundant are found to be synergistic, this finding may call for a new interpretation of theory. For example, the finding that worked examples and tutored problem solving are synergistic, not redundant (Salden, Aleven, Renkl, & Schwonke, 2008) calls for a “tightening” of prior theoretical arguments that have been put forward to explain the effectiveness of worked examples as an adjunct to untutored problem solving. Where prior explanations focused on the lower cognitive load induced by worked examples, it now becomes necessary to explain why tutored problem solving (by which, in one popular view, open problem steps are turned into example steps on an as-needed basis, namely, when the student asks the tutor for hints) does not result in a comparable lowering of cognitive load. (We will not attempt to resolve this issue here.) Lastly, resolving the Assistance Dilemma requires that we understand how different forms of support/assistance influence each other, if we are to have a predictive model of how to provide optimal assistance to each student.

In designing studies to investigate the synergy of principles, effect size plays a key role. If two interventions (associated with different principles) have effect sizes of 0.3 SD and 0.5 SD respectively, we should expect that if they are synergistic, the effect size of the combined interventions will be greater than 0.5 SD, and possibly as high as 0.8 SD (though the maximum theoretically possible compound effect size will not, mathematically, be exactly the sum of the two effect sizes). If the combined effect is not distinguishable from the larger of the two original effect sizes, we can conclude (if statistical power was sufficient) that the two interventions do not have synergistic effects.
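
To see why effect sizes do not simply add, consider a simplified model (an illustration only, under the assumption that each intervention shifts the treatment-group mean relative to a control distribution with standard deviation <math>\sigma</math>): if the two interventions produce mean shifts <math>\Delta_1 = d_1\sigma</math> and <math>\Delta_2 = d_2\sigma</math>, the compound effect size is

:<math>d_{12} = \frac{\Delta_1 + \Delta_2}{\sigma_{pooled}},</math>

which equals <math>d_1 + d_2</math> only in the special case where the combined intervention leaves the pooled standard deviation unchanged (<math>\sigma_{pooled} = \sigma</math>). If the combined condition compresses score variance (for instance, near a test ceiling), the compound effect size deviates from the simple sum.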

Hence, one key requirement for any study designed to investigate the combination of principles is a sample with sufficient statistical power. Unlike most cases, where the goal is to have sufficient statistical power to achieve statistical significance, in this case the goal is to have sufficient statistical power to estimate effect size precisely, a more stringent goal that requires a larger sample.
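
As a rough illustration (a hypothetical sketch, not the study’s actual power analysis; the target effect size and precision below are arbitrary), the following uses the standard large-sample approximation to the standard error of Cohen’s d (Hedges & Olkin, 1985) to find the per-group sample size at which a 95% confidence interval reaches a desired half-width:

<pre>
import math

def se_cohens_d(d, n_per_group):
    """Large-sample SE of Cohen's d for two equal-size groups."""
    n1 = n2 = n_per_group
    return math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))

def n_for_ci_halfwidth(d, halfwidth):
    """Smallest per-group n whose 95% CI half-width (1.96 * SE) <= halfwidth."""
    n = 2
    while 1.96 * se_cohens_d(d, n) > halfwidth:
        n += 1
    return n

# Distinguishing, say, d = 0.8 from d = 0.5 requires a half-width well
# under 0.3 SD; a half-width of 0.15 SD needs roughly:
print(n_for_ci_halfwidth(0.8, 0.15))  # -> 369 students per group
</pre>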

It is of course impossible to evaluate the effectiveness of all combinations of principles. Currently, the PSLC website lists 15 principles; investigating every combination of two or more of them would require 32,752 studies (105 for the pairs alone). The LearnLab infrastructure has not yet grown far enough to make that possible!

We therefore adopt a “focus gambling” strategy (Bruner, Austin, & Goodnow, 1956), building on our prior “vary-one-principle-at-a-time” investigations of individual principles. That is, we select successful principles from previous PSLC work that should be complementary from a theoretical standpoint. We investigate whether the combination of these principles leads to a large effect size, when compared to the baseline tutor that does not implement these principles.

If this approach turns out to be successful, it would demonstrate (a) that PSLC-style research can produce a large effect size, and (b) that Center Mode was essential to achieving that result. In particular, the result would combine five different PSLC projects, including four in vivo studies and one enabling technology project (CTAT, which will be used for on-line assessment of student learning). This work would also pave the way for similar projects within other LearnLabs and within the learning sciences more generally. In addition, we will have created a highly effective tutor unit for the Geometry Cognitive Tutor, with a formula for how other tutor units could be similarly transformed.

==== Interventions / Principles to Combine ====

Our intervention will focus on five principles (all listed and elaborated on the PSLC theory wiki) that we have addressed in our prior projects in the Geometry LearnLab. Prior projects provide some of the empirical support for each of these principles, as indicated below. (For many of them, there is additional support from other projects as well.) Here we list the principles and briefly describe the empirical evidence in favor of them that we generated in our previous work. The next section describes in more detail how each principle was implemented in the Geometry Cognitive Tutor.

* Visual-verbal integration principle: Instruction that includes both visual and verbal information leads to more robust learning than instruction that includes verbal information alone, but only when the instruction supports learners as they coordinate information from both sources and the representations guide student attention to deep features.

In a project entitled “Robust Learning in Visual/Verbal Problem Solving,” we found that an interactive diagram led to deeper understanding of geometry principles and better long-term retention of geometry problem-solving skills than a format where the diagram is non-interactive and students interact with the tutor in a separate solution table (Butcher & Aleven, 2007; 2008). Diagram interaction was found to be especially powerful in supporting improved performance on transfer items that required students to coordinate visual and verbal information during assessment (e.g., tasks that required students to explain how conceptual (verbal) geometry principles applied to relevant (visual) geometry diagrams). Results also demonstrated a significant correlation (r = .51, p < .01) between performance on these “visual-verbal coordination” items and problem-solving performance at delayed posttest.

* Worked example principle: In contrast to the traditional approach of giving a list of homework (or seatwork) problems for students to solve, students learn more efficiently and more robustly when more frequent study of worked examples is interleaved with problem-solving practice.

In a project entitled “Do Worked Examples Improve Tutored Problem Solving?” we found that adding worked examples to the Geometry Cognitive Tutor improved conceptual transfer, especially when the examples were faded in a manner adaptive to each individual student’s knowledge level (Salden, Aleven, Renkl, & Schwonke, 2008).

* Prompted self-explanation principle: When students are given a worked example or text to study, prompting them to self-explain each step of the worked example or each line of the text causes higher learning gains than having them study the material without such prompting.

* Complete and efficient practice principle: Student learning is more likely to be complete, and to occur efficiently, when problems are selected in line with mastery learning driven by accurate knowledge-tracing estimates that use an accurate model of the knowledge components in the domain.

Mastery learning, in which students are given problems relevant to a knowledge component until they demonstrate mastery – and no further practice on it after that point – has been shown to lead to positive learning gains (e.g., Bloom, 1968; Corbett, 2001; Cen, Koedinger, & Junker, 2007). In particular, robust learning is not possible until complete learning has occurred, as it is difficult for knowledge to be robust if it has not been completely acquired. Mastery learning within Cognitive Tutors occurs separately for each student and knowledge component, in line with the Knowledge Decomposition principle. Successfully guiding every student through complete and efficient practice depends on mappings between tutor items and knowledge components that represent psychological reality, and on accurate inference as to when a student knows each knowledge component.
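
The sketch below illustrates the mastery-learning loop just described (a simplified, hypothetical illustration; the 0.95 mastery threshold is the cutoff conventionally used in Cognitive Tutors, and the problem and skill names are invented for the example):

<pre>
MASTERY_THRESHOLD = 0.95  # conventional Cognitive Tutor mastery cutoff

def select_next_problem(problems, p_known):
    """problems: dict problem_id -> set of knowledge components it practices.
    p_known:  dict KC -> current estimate of P(student knows the KC)."""
    unmastered = {kc for kc, p in p_known.items() if p < MASTERY_THRESHOLD}
    if not unmastered:
        return None  # unit complete; further practice would be over-practice
    candidates = [pid for pid, kcs in problems.items() if kcs & unmastered]
    # Prefer problems covering more unmastered KCs; among those, prefer
    # the one whose weakest covered KC has the lowest mastery estimate.
    return max(candidates,
               key=lambda pid: (len(problems[pid] & unmastered),
                                -min(p_known[kc]
                                     for kc in problems[pid] & unmastered)))

problems = {"p1": {"angle-sum"}, "p2": {"angle-sum", "isosceles"}}
p_known = {"angle-sum": 0.97, "isosceles": 0.40}
print(select_next_problem(problems, p_known))  # -> p2 (practices weak KC)
</pre>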

This principle breaks down into two sub-principles, which concern two ways in which complete and efficient practice can be achieved.

** Accurate knowledge decomposition principle: Complete and efficient learning, within mastery learning, is promoted when assessments of mastery are based upon a decomposition of the domain into knowledge components that accurately represent the domain.

Within most existing methods for modeling student knowledge acquisition within intelligent tutoring systems (e.g., Martin & VanLehn, 1995; Corbett & Anderson, 1995; Beck & Chang, 2007; Baker, Corbett, & Aleven, 2008), student knowledge is decomposed into a set of knowledge components, which represent the skills or concepts the student needs to learn. Inaccurately combining skills leads to “spiky” performance graphs, where a single knowledge component in the model actually represents multiple knowledge components (Anderson, Conrad, & Corbett, 1989). Where this occurs, some of the knowledge components bundled into the composite are likely to receive too much practice while others receive too little. Methods now exist for extracting more accurate decompositions of domain structure, based on the patterns of students’ errors (Barnes, 2003; Cen, Koedinger, & Junker, 2006). Initial work has shown promising, but not conclusive, results indicating that student learning is improved by these refined domain models.
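
To make the “spiky” pattern concrete, here is a toy simulation (invented error rates, not data from any study): when an easy and a hard skill are merged into one labeled knowledge component, the aggregate error curve mixes the two smooth underlying curves and jumps up and down across opportunities.

<pre>
# Toy illustration: two true skills with different error rates are merged
# into one labeled KC; the merged curve is "spiky" rather than smooth.
easy_err = [0.30, 0.20, 0.12, 0.08, 0.05, 0.03]   # invented decay curves
hard_err = [0.70, 0.55, 0.45, 0.38, 0.32, 0.28]

# Which true skill each opportunity of the merged KC actually exercises
# (determined in practice by the problem sequence in the curriculum).
schedule = ["easy", "hard", "easy", "easy", "hard", "hard",
            "easy", "hard", "easy", "hard", "easy", "hard"]

counts = {"easy": 0, "hard": 0}
merged_curve = []
for skill in schedule:
    errs = easy_err if skill == "easy" else hard_err
    merged_curve.append(errs[min(counts[skill], len(errs) - 1)])
    counts[skill] += 1

print(merged_curve)
# -> [0.3, 0.7, 0.2, 0.12, 0.55, 0.45, 0.08, 0.38, 0.05, 0.32, 0.03, 0.28]
# Each true skill declines smoothly, but the merged KC's curve jumps up
# and down: the signature that the KC should be split.
</pre>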

** Accurate knowledge estimates principle: Complete and efficient learning is promoted, within mastery learning, when mastery is assessed with knowledge models that accurately predict students’ knowledge of each knowledge component at each moment, as measured by their ability to predict future performance.

Mastery learning (Bloom, 1968; Corbett, 2001) depends upon accurate estimation of the probability that a student knows a given knowledge component at a given time. Within the Bayesian Knowledge Tracing approach used in Cognitive Tutors (Corbett & Anderson, 1995), four parameters govern the predictions of student knowledge. If these parameters are inappropriate (fit using a less accurate method, or never fit at all), estimates of student knowledge will be inaccurate, and students may receive too little or too much practice (e.g., Cen, Koedinger, & Junker, 2007). More accurate estimates, by contrast, increase the odds that students’ learning will be both efficient and complete. In a project entitled “How Content and Interface Features Affect Student Choices Within the Learning Space,” we developed methods for estimating student knowledge with contextual estimates of guessing and slipping, and showed that these methods make student models significantly more accurate at estimating student knowledge (Baker, Corbett, & Aleven, 2008), reducing the probability of under-practice or over-practice.
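
For reference, a minimal sketch of the Bayesian Knowledge Tracing update (the standard equations from Corbett & Anderson, 1995; the parameter values below are invented placeholders). The four parameters are the initial probability of knowing the knowledge component, P(L0); the probability of learning it at each opportunity, P(T); and the guess and slip probabilities, P(G) and P(S). In the contextual variant (Baker, Corbett, & Aleven, 2008), the guess and slip values passed into each update would vary with the context of the response rather than being fixed per KC.

<pre>
def bkt_update(p_known, correct, p_guess, p_slip, p_transit):
    """One Bayesian Knowledge Tracing step (Corbett & Anderson, 1995).

    p_known: prior P(student knows the KC) before the observed response.
    Returns the posterior after the response and one learning opportunity."""
    if correct:
        evidence = p_known * (1 - p_slip) + (1 - p_known) * p_guess
        p_given_obs = p_known * (1 - p_slip) / evidence
    else:
        evidence = p_known * p_slip + (1 - p_known) * (1 - p_guess)
        p_given_obs = p_known * p_slip / evidence
    # Account for the chance the KC was learned at this opportunity.
    return p_given_obs + (1 - p_given_obs) * p_transit

# Invented parameter values, for illustration only:
p = 0.25  # P(L0)
for obs in [True, True, False, True, True]:
    p = bkt_update(p, obs, p_guess=0.20, p_slip=0.10, p_transit=0.15)
print(round(p, 3))  # estimate of P(student knows the KC) after 5 responses
</pre>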

==== Why Should These Principles Be Synergistic? ====

In this section, we consider theoretical reasons to expect that the five principles will be synergistic or additive. The first three principles primarily affect the robustness of student learning, whereas the last two primarily affect the efficiency of learning. The two groups of principles should therefore be synergistic. We also expect synergy within each group. While we are not testing individual pairs of principles (given the statistical power achievable with the available number of study participants), we can make inferences about the degree of synergy from the eventual effect size obtained. This research is conceptualized as a first step toward understanding the potential for synergistic effects among PSLC instructional principles, and a test of what level of benefit to learners can be expected from the integration of PSLC theory into multi-principle interventions. The proposed research will test whether the theoretical prediction of synergy materializes, as a first step toward “meta-principles” that describe under what circumstances one can expect combinations of principles to be synergistic.

The combination of an interactive diagram and worked examples should synergistically support sense-making during geometry problem solving, because these features support complementary aspects of the sense-making process. Worked examples encourage self-explanation, and visual interaction in an interactive diagram supports student focus on deep, relevant features of the problem and coordination between visual and verbal sources of information. In combination, worked examples should increase the frequency of self-explanation, while visual interaction should increase the quality of those explanations. Think-aloud research conducted in previous PSLC studies provides empirical support for these hypothesized effects. Protocol data collected for worked-example tutoring suggest that worked examples encourage conceptually focused explanations during problem solving (Schwonke et al., 2007). Think-aloud data collected for visually interactive tutors (Butcher & Aleven, 2008) demonstrated that visual interaction did not influence the frequency with which students engaged in conceptual self-explanations, but it did influence the accuracy of those explanations: students who interacted with geometry diagrams during tutoring made significantly fewer erroneous self-explanations and were less likely to express confusion after a correct answer had been accepted by the tutor.

Although both principles are theorized to support self-explanation, it is possible that they will not prove synergistic if any one or two interventions “max out” the quality and quantity of self-explanation of which students are capable. However, there is likely to be no detrimental effect from combining these principles: even if synergistic support is not achieved, there are unlikely to be problematic interactions between the two.

At the same time that sense-making processes are supported, student learning will be made more efficient by incorporating two techniques for improving student modeling (knowledge model restructuring and contextual knowledge tracing, both of which relate to the tutor’s mastery learning method). First, the mapping between knowledge components and items will be made more accurate using combinatorial search within the space of difficulty factors models (Cen, Koedinger, & Junker, 2006). Second, the parameters of Bayesian Knowledge Tracing (Corbett & Anderson, 1995) will be adjusted using contextual estimates of guess and slip (Baker, Corbett, & Aleven, 2008) and by improving estimates of the initial probability that a student knows each knowledge component (Cen, Koedinger, & Junker, 2007). These optimizations should work synergistically, since they remove the need for the knowledge-tracing parameters to compensate for imperfections in the knowledge component model. Increasing the accuracy of student modeling in this fashion will prevent over-practice and increase learning efficiency (cf. Cen, Koedinger, & Junker, 2007), while focusing students on the knowledge components most in need of practice (cf. Baker, Corbett, & Koedinger, 2004).

In addition, improved knowledge estimates will increase the effectiveness of adaptively faded worked examples, since that technique depends on accurate estimates of student knowledge: example steps related to a knowledge component are faded based on mastery of that knowledge component, as assessed through Bayesian Knowledge Tracing.
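
A minimal sketch of this adaptive fading logic (our own illustration, not the tutor’s actual implementation; the threshold and KC names are assumptions): each solution step is presented worked-out until Bayesian Knowledge Tracing estimates that the student has mastered that step’s knowledge component, after which it becomes a step the student must solve.

<pre>
FADE_THRESHOLD = 0.95  # assumed mastery cutoff for fading a KC's examples

def render_step(step_kc, p_known):
    """Decide how to present one solution step, given the BKT estimate
    p_known[step_kc] of the student's mastery of that step's KC."""
    if p_known[step_kc] >= FADE_THRESHOLD:
        return "problem_step"         # faded: student produces the step
    return "worked_example_step"      # not yet mastered: show it worked out

p_known = {"triangle-angle-sum": 0.97, "inscribed-angle": 0.55}
print(render_step("triangle-angle-sum", p_known))  # -> problem_step
print(render_step("inscribed-angle", p_known))     # -> worked_example_step
</pre>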

Finally, combining multiple types of student model optimizations will help us understand to what extent an optimal model of student learning needs to have each of its aspects optimized, shaping future work in student knowledge modeling.

=== Glossary ===