=Computational Modeling and Data Mining=

==Introduction==
One of the greatest impacts of technology on 21st century education will be the scientific advances made possible by mining the vast and rapidly growing body of learning data coming from educational technologies. The Computational Modeling and Data Mining (CMDM) Thrust is pursuing the scientific goal of using such data to advance precise, computational theories of how students learn academic content. We will accomplish this by drawing on and expanding the enabling technologies we have already built for collecting, storing, and managing large-scale educational data sets. For example, [http://www.learnlab.org/technologies/datashop/index.php DataShop] will grow to include larger and richer datasets coming not only from our LearnLab courses but also from thousands of schools using the Cognitive Tutor courses and from additional contexts where we can collect student dialogue data, measures of motivation and affect, and layered assessments of both student knowledge and metacognitive competencies. This growth in the amount, scope, and richness of learning data will make [http://www.learnlab.org/technologies/datashop/index.php DataShop] an even more fertile cyber-infrastructure resource for learning science researchers. But to realize the full potential of that resource and make new discoveries about the nature of student learning, researchers need new and powerful knowledge-discovery tools; building those tools is the work of the CMDM Thrust.

The CMDM Thrust will pursue three related areas: 1) domain-specific models of student knowledge representation and acquisition, 2) domain-general models of [[Metacognition and Motivation|metacognitive, motivational]], and [[Social_and_Communicative_Factors_in_Learning|social processes]] as they impact student learning, and 3) predictive engineering models and methods that enable the design of large-impact instructional interventions.

==Developing Better Cognitive Models of ''Domain-Specific Content''==
Understanding and engineering better human learning of complex academic topics depends on accurate and usable models, derived through [[cognitive task analysis]], of the domains students are learning. Domain modeling has been a continual challenge, however, because student knowledge is not directly observable and its structure is often hidden by our "expert blind spots" ([[User:Koedinger|Koedinger]] & Nathan, 2004; Nathan & Koedinger, 2000). Key research questions are: a) Can the discovery of a domain's knowledge structure be automated? b) Do [[knowledge component]] models provide a precise and predictive theory of [[transfer]] of learning? c) Can we integrate separate methods for modeling memory, learning, transfer, and guessing/slipping to optimize models of student knowledge, and in turn optimize students' effective time on task?
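
One concrete illustration of what a precise, predictive knowledge component model can look like (our gloss, using the standard additive logistic form from the Learning Factor Analysis literature, e.g., Cen, Koedinger, & Junker, 2006) models the probability <math>p_{ij}</math> that student <math>i</math> succeeds on step <math>j</math> as

<math>\ln\frac{p_{ij}}{1-p_{ij}} = \theta_i + \sum_k q_{jk}\,(\beta_k + \gamma_k\,T_{ik})</math>

where <math>\theta_i</math> is the student's overall proficiency, <math>q_{jk}</math> indicates whether step <math>j</math> taps [[knowledge component]] <math>k</math>, <math>\beta_k</math> is that component's easiness, <math>\gamma_k</math> its learning rate, and <math>T_{ik}</math> the number of practice opportunities the student has had on it. Transfer predictions then follow directly from which tasks share knowledge components.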

One of the planned projects for Year 5 will build on our promising past results with the Learning Factor Analysis (LFA) algorithms of Cen, Koedinger, and Junker (2006). Specifically, we will broaden the generalizability of this domain-modeling approach, incorporate new knowledge-discovery methods, and increase the level of automation of knowledge analysis so as to engage more researchers in applying this technique to even more content domains. To more fully automate the discovery of knowledge components, Pavlik will use Partially Ordered Knowledge Structures (POKS) (cf. Desmarais et al., 1995) to build more complete and accurate representations of a given domain, capturing the prerequisite relationships between hypothesized knowledge components and their predictions of performance. The models that this work produces will become the input to algorithms that can optimize, for each student, the amount of practice and the ideal sequencing of instructional events for acquiring each knowledge component. These approaches will be applied to tutors across domains, including math, science, and language (particularly the English vocabulary and article learning domains). A related project will investigate the impact of combining LFA model refinement with improved moment-by-moment knowledge modeling, using a probabilistic model that draws on student interaction data to estimate whether a student's correct answer or error informs us about their knowledge or simply represents a guess or slip (Baker, Corbett & Aleven, 2008). In addition to clear applied benefits, these projects will advance a more precise science of reasoning and learning as it occurs in academic settings.
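
To make the guess/slip idea concrete, here is a minimal sketch (ours, with hypothetical parameter values) of the classic Bayesian Knowledge Tracing update that the Baker, Corbett & Aleven (2008) work builds on; their contextual variant replaces the fixed guess and slip constants below with per-observation, machine-learned estimates.

<syntaxhighlight lang="python">
# Minimal Bayesian Knowledge Tracing sketch; parameter values are
# hypothetical. The contextual-estimation work replaces the fixed
# guess/slip constants with per-observation estimates.

def bkt_update(p_known, correct, guess=0.2, slip=0.1, p_transit=0.15):
    """Return P(student knows the KC) after observing one response."""
    if correct:
        # A correct answer comes from knowledge (no slip) or a lucky guess.
        evidence = p_known * (1 - slip) + (1 - p_known) * guess
        posterior = p_known * (1 - slip) / evidence
    else:
        # An error comes from a slip or from genuine lack of knowledge.
        evidence = p_known * slip + (1 - p_known) * (1 - guess)
        posterior = p_known * slip / evidence
    # Account for learning between practice opportunities.
    return posterior + (1 - posterior) * p_transit

p = 0.3  # hypothetical prior probability that the KC is known
for correct in [True, True, False, True]:
    p = bkt_update(p, correct)
    print(f"P(known) = {p:.3f}")
</syntaxhighlight>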

==Developing Models of ''Domain-General'' Learning and Motivational Processes==
Our work toward developing high-fidelity models of student learning has involved capturing, quantifying, and modeling domain-general mechanisms that impact students' learning and the robustness of that learning. In the first four years of the PSLC, our models have moved beyond domain-specific cognition (e.g., the cognitive models behind the intelligent tutors for Physics, Algebra, and Geometry) to capture metacognitive aspects of learning (e.g., Aleven et al.'s, 2006, detailed model of help-seeking behavior), general mechanisms of learning (Matsuda et al., 2007), and motivational and affective constructs such as students' off-task behavior (Baker, 2007) and whether a student is "gaming the system" (Baker et al., 2008; shown to be associated with boredom and confusion in Rodrigo et al., 2007).

A key Year 5 effort will extend the [http://www.cs.cmu.edu/~mazda/SimStudent SimStudent] project both as a theory-building tool and as an instruction-informing tool (Matsuda et al., 2008). We will use SimStudent to make predictions about the nature of students' generalization errors and the effects of prior knowledge on students' learning and transfer, testing these predictions using human-learning data in DataShop (Matsuda et al., 2009; see [[Application of SimStudent for Error Analysis]]). While psychological and neuroscientific models typically produce only reaction-time predictions, these models will predict specific errors and forecast the pattern of reduction in those errors. Developing a system that integrates domain-general processes to produce human-like errors in inference, calculation, generalization, and the use of feedback/help/instructions would be both a major theoretical breakthrough and an extremely useful tool for other researchers.

Looking forward to the renewal period, an important project will be to develop machine-learned models of student behaviors at a range of time scales, from momentary affective states like boredom and frustration (cf. Kapoor, Burleson, & Picard, 2007) to longer-term motivational and metacognitive constructs such as performance vs. learning orientation and self-regulated learning (Azevedo & Cromley, 2004; Elliott & Dweck, 1988; Pintrich, 2000; Winne & Hadwin, 1998). We will expand prior PSLC work by Baker and colleagues (Rodrigo et al., 2007, 2008; Baker et al., 2008) to explore causal connections between these models and existing models of motivation-related behaviors such as gaming the system and off-task behavior. We will pursue models of differences in cognitive, affective, social, and motivational factors as they relate to classroom culture, schools, and teachers. These proposed models would be, to our knowledge, the first systematic investigations of school-level factors affecting fine-grained states of student learning.
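
As a sketch of what training such a detector involves (illustrative only: the features, data, and labels below are invented stand-ins, not the actual PSLC detector features, which are distilled from tutor logs and human-coded field observations):

<syntaxhighlight lang="python">
# Illustrative-only sketch: fit a simple classifier that labels student
# actions as "gaming/off-task" from logged interaction features. The
# feature set and data are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Columns: mean response time (s), hints per problem, recent error count.
X = np.array([[ 2.1, 4, 5],
              [14.0, 0, 1],
              [ 1.5, 6, 4],
              [11.2, 1, 0],
              [ 2.8, 5, 5],
              [ 9.7, 0, 2]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = human-coded gaming/off-task

detector = LogisticRegression()
scores = cross_val_score(detector, X, y, cv=3)  # cross-validated accuracy
print(scores.mean())
</syntaxhighlight>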

==Developing Predictive ''Engineering Models'' to Inform Instructional Event Design==
A fundamental theoretical problem for the sciences of learning and instruction is what we have called "the [[assistance dilemma|Assistance Dilemma]]": optimizing the amount and timing of instruction so that it is neither too little nor too much, and neither too early nor too late (Koedinger & Aleven, 2007; Koedinger, 2008; Koedinger, Pavlik, McLaren, & Aleven, 2008). Two theoretical advances are necessary before we can resolve this dilemma. First, we need a clear delineation of the multiple possible dimensions of instructional assistance (e.g., worked examples, feedback, on-demand hints, self-explanation prompts, or optimally-spaced practice trials). We broadly define assistance to include not only direct verbal instruction but also instructional scaffolds that prompt student thinking or action, as well as implicit affordances or difficulties in the learning environment. Second, we need precise, predictive models of when increasing assistance (reducing difficulties) or decreasing assistance (increasing difficulties) is best for robust learning. Existing theoretical work on this topic, including [[cognitive load]] theory (e.g., Sweller, 1994; van Merriënboer & Sweller, 2005), desirable difficulties (Bjork, 1994), and cognitive apprenticeship (Collins, Brown, & Newman, 1989), has not reached the stage of precise computational modeling that can be used to make a priori predictions about optimal levels of assistance.

We will use DataShop log data to make progress on the Assistance Dilemma by targeting dimensions of assistance one at a time and creating parameterized mathematical models that predict the optimal level of assistance for robust learning (cf. Koedinger et al., 2008). Such a mathematical model has been achieved for the practice-interval dimension (changing the amount of time between practice trials), and progress is being made on the example-problem dimension (changing the ratio of worked examples to problems). These models generate the inverted-U-shaped curve characteristic of the Assistance Dilemma as a function of particular parameter values that describe the instructional context, and they are created and refined using student learning data from DataShop. We hypothesize that this modeling approach will work for other dimensions of assistance as well. These models will address the limitations of current theory noted above by generating ''a priori'' predictions of which forms of assistance or difficulty will enhance learning. Further, they will provide the basis for on-line algorithms that adapt to individual student differences and changes over time, optimizing the assistance provided to each student for each knowledge component at each point in their learning trajectory.
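
As a schematic example (the quadratic form here is our simplifying assumption for illustration, not the fitted model from the practice-interval work), learning efficiency <math>E</math> along an assistance dimension <math>a</math> can be parameterized as

<math>E(a) = \mu - \kappa\,(a - a^{*})^{2}</math>

which produces the inverted U, peaking at the optimal assistance level <math>a^{*}</math>; fitting <math>\mu</math>, <math>\kappa</math>, and <math>a^{*}</math> to DataShop learning curves for a given instructional context then turns the curve into an a priori prediction of how much assistance to deliver.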

== [[CMDM Meetings]] ==

== Descendants ==

To create a new project page, enclose your project name in a double set of brackets. Details on the project page format may be [[Project_Page_Template_and_Creation_Instructions|found here]].

*[[Gordon - Temporal learning for EDM]]
*[[Koedinger - Discovery of Domain-Specific Cognitive Models]]
*[[Koedinger - Toward a model of accelerated future learning]]
*[[Baker - Building Generalizable Fine-grained Detectors]]
*[[Chi - Induction of Adaptive Pedagogical Tutorial Tactics]]
*[[Baker - Closing the Loop]]
*[[Pavlik and Koedinger - Generalizing the Assistance Formula]]
*[[Mayer_and_McLaren_-_Social_Intelligence_And_Computer_Tutors | McLaren and Mayer - Social Intelligence and Learning from "polite" tutors]]
*[[Application of SimStudent for Error Analysis | Matsuda - Application of SimStudent for Error Analysis]]
*[[Penn - Discovering a Domain Model for Organic Chemistry]]

== References ==
* Azevedo, R., & Cromley, J. G. (2004). Does training on self-regulated learning facilitate students' learning with hypermedia? Journal of Educational Psychology, 96(3), 523-535.
* Baker, R.S.J.d. (2007). Modeling and understanding students' off-task behavior in intelligent tutoring systems. Proceedings of ACM CHI 2007: Computer-Human Interaction, 1059-1068.
* Baker, R.S.J.d., Corbett, A.T., & Aleven, V. (2008). More accurate student modeling through contextual estimation of slip and guess probabilities in Bayesian knowledge tracing. Proceedings of the 9th International Conference on Intelligent Tutoring Systems, 406-415.
* Baker, R.S.J.d., Corbett, A.T., Roll, I., & Koedinger, K.R. (2008). Developing a generalizable detector of when students game the system. User Modeling and User-Adapted Interaction, 18(3), 287-314.
* Baker, R., Walonoski, J., Heffernan, N., Roll, I., Corbett, A., & Koedinger, K. (2008). Why students engage in "gaming the system" behavior in interactive learning environments. Journal of Interactive Learning Research, 19(2), 185-224.
* Bjork, R.A. (1994). Memory and metamemory considerations in the training of human beings. In J. Metcalfe & A. Shimamura (Eds.), Metacognition: Knowing about knowing (pp. 185-205). Cambridge, MA: MIT Press.
* Collins, A., Brown, J. S., & Newman, S. E. (1989). Cognitive apprenticeship: Teaching the crafts of reading, writing, and mathematics. In L. B. Resnick (Ed.), Knowing, learning, and instruction: Essays in honor of Robert Glaser (pp. 453-494). Hillsdale, NJ: Erlbaum.
* Desmarais, M., Maluf, A., & Liu, J. (1995). User-expertise modeling with empirically derived probabilistic implication networks. User Modeling and User-Adapted Interaction, 5(3-4), 283-315.
* [[User:Koedinger|Koedinger]], K. R., & Aleven, V. (2007). Exploring the assistance dilemma in experiments with Cognitive Tutors. Educational Psychology Review, 19(3), 239-264.
* Koedinger, K. R., Pavlik Jr., P. I., McLaren, B. M., & Aleven, V. (2008). Is it better to give than to receive? The assistance dilemma as a fundamental unsolved problem in the cognitive science of learning and instruction. In B. C. Love, K. McRae, & V. M. Sloutsky (Eds.), Proceedings of the 30th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society.
* Matsuda, N., Cohen, W. W., Sewall, J., Lacerda, G., & Koedinger, K. R. (2008). Why tutored problem solving may be better than example study: Theoretical implications from a simulated-student study. In B. P. Woolf, E. Aimeur, R. Nkambou, & S. Lajoie (Eds.), Proceedings of the International Conference on Intelligent Tutoring Systems (pp. 111-121). Heidelberg, Berlin: Springer.
* Matsuda, N., Cohen, W. W., Sewall, J., Lacerda, G., & Koedinger, K. R. (2007). Evaluating a simulated student using real students data for training and testing. In C. Conati, K. McCoy, & G. Paliouras (Eds.), Proceedings of the International Conference on User Modeling (LNAI 4511) (pp. 107-116). Berlin, Heidelberg: Springer.
* McLaren, B.M., Lim, S., & Koedinger, K.R. (2008). When and how often should worked examples be given to students? New results and a summary of the current state of research. In B. C. Love, K. McRae, & V. M. Sloutsky (Eds.), Proceedings of the 30th Annual Conference of the Cognitive Science Society (pp. 2176-2181). Austin, TX: Cognitive Science Society.
* Nathan, M. J., & Koedinger, K.R. (2000). Teachers' and researchers' beliefs of early algebra development. Journal for Research in Mathematics Education, 31(2), 168-190.
* Rodrigo, M.M.T., Baker, R.S.J.d., d'Mello, S., Gonzalez, M.C.T., Lagud, M.C.V., Lim, S.A.L., Macapanpan, A.F., Pascua, S.A.M.S., Santillano, J.Q., Sugay, J.O., Tep, S., & Viehland, N.J.B. (2008). Comparing learners' affect while using an intelligent tutoring system and a simulation problem solving game. Proceedings of the 9th International Conference on Intelligent Tutoring Systems, 40-49.
* Rodrigo, M.M.T., Baker, R.S.J.d., Lagud, M.C.V., Lim, S.A.L., Macapanpan, A.F., Pascua, S.A.M.S., Santillano, J.Q., Sevilla, L.R.S., Sugay, J.O., Tep, S., & Viehland, N.J.B. (2007). Affect and usage choices in simulation problem solving environments. Proceedings of Artificial Intelligence in Education 2007, 145-152.
* Sweller, J. (1994). Cognitive load theory, learning difficulty and instructional design. Learning and Instruction, 4, 295-312.
* [http://www.ou.nl/eCache/DEF/7/332.html Van Merriënboer, J.J.G.], & Sweller, J. (2005). Cognitive load theory and complex learning: Recent developments and future directions. Educational Psychology Review, 17(1), 147-177.

=Collaboration scripts=

Collaboration scripts (e.g., King, Staffieri, & Adelgais, 1998; Fischer, Bruhn, Gräsel, & Mandl, 2002; Soller, 2004; Rummel & Spada, 2005; Coleman, 1998; Bereiter & Scardamalia, 1989) are an [[instructional method]] that structures the [[collaboration]] process by guiding the interacting partners through a sequence of interaction phases with designated activities and roles. This method often involves first teaching students how to collaborate and then providing prompts or sentence openers that scaffold the scripted collaboration.

Scripts are expected to promote learning by prompting cognitive, [[metacognition|metacognitive]], and social processes that might otherwise not occur; that is, students are more likely to traverse useful learning paths than in unscripted collaboration. For example, a script may prompt interacting partners to engage in activities like posing questions, providing explanations, and giving feedback.

Dillenbourg and Jermann (2006) describe different core scripts, i.e., the schemata lying at the heart of any particular collaboration script. They distinguish between jigsaw, conflict, and reciprocal script approaches. A reflection phase is often included in scripts.

This method of instruction includes simple prompting, e.g., for self-explanation while a pair is studying an example ([[Hausmann_Study2|Hausmann & VanLehn, 2007]]).

We do not include reciprocal teaching under this category because it includes modeling and other teacher-led activities.

* Dillenbourg, P., & Jermann, P. (2006). Designing integrative scripts. In F. Fischer, I. Kollar, H. Mandl, & J. Haake (Eds.), ''Scripting computer-supported communication of knowledge: Cognitive, computational, and educational perspectives'' (pp. 259-288). New York: Springer.

[[Category:Glossary]]
[[Category:Interactive Communication]]
[[Category:Scripted Collaborative Problem Solving]]
[[Category:Independent Variables]]

=Conceptual tasks=

A conceptual task is intended by the task's designer to be achievable by applying only [[conceptual knowledge]], not by applying [[procedural]] knowledge. (Note, however, that students may not always achieve a task by applying the knowledge it was designed to tap.) When used on a post-test, conceptual tasks provide one important way of measuring [[transfer]] and thus [[robust learning]]. In contrast, [[procedural tasks]] would be part of a [[normal post-test]] that measures learning that may or may not be robust.

Rittle-Johnson and Siegler (1998, p. 77) distinguish procedural from conceptual knowledge as follows: "We define conceptual knowledge as understanding of the principles that govern the domain and interrelations between pieces of knowledge in the domain (although this knowledge does not need to be explicit). In the literature this type of knowledge is referred to as understanding or principled knowledge. We define procedural [knowledge] as action sequences for solving problems. In the literature this type of knowledge is sometimes referred to as skills, algorithms or [[strategies]]."

Examples:
* Given a Chinese tone and/or character, generate its English translation.
* Given a transformation of an equation, indicate whether the distributive, associative, or commutative law justifies the transformation.
* Ask a beginning chemistry student to explain the term "valence" using Bohr's model of the atom.
* Given a physical situation, such as a baseball flying straight up after being thrown by a person, ask the student what forces are acting on the moving object.

Non-examples:
* Given an algebraic equation with a single occurrence of the unknown, solve it.

Borderline examples:
* Given 5+3, indicate that the answer is 8. For advanced learners, this is done by retrieving a fact, which is a kind of low-level concept. For beginning learners, it is done by a counting strategy, so the task taps procedural knowledge.

Rittle-Johnson, B., & Siegler, R. S. (1998). The relation between conceptual and procedural knowledge in learning mathematics. In C. Donlan (Ed.), The development of mathematical skills. East Sussex, UK: Psychology Press.

[[Category:Glossary]]
[[Category:Dependent Variables]]
[[Category:Coordinative Learning]]
[[Category:PSLC General]]

=Cognitive task analysis=

Clark and Estes (1996) define Cognitive Task Analysis (CTA) as "the general term used to describe a set of methods and techniques that specify the cognitive structures and processes associated with task performance. The focal point is the underlying cognitive processes, rather than observable behaviors. Another defining characteristic of CTA is an attempt to describe the differences between novices and experts in the development of knowledge about tasks (Redding, 1989)."

Feldon (2007) suggests a narrower definition in the phrase "the use of structured knowledge elicitation techniques (e.g., cognitive task analysis)," which emphasizes eliciting knowledge from experts, presumably through the relatively direct approaches of interviews or [[think-aloud data|think alouds]]. Whether the more indirect approach of using extensive student performance data to do [[knowledge component]] analysis via educational data mining counts as a version of CTA is thus perhaps an open question.

See also the [[knowledge decomposability hypothesis]].

Clark and Estes (1996) highlight the general value of task analysis: "Prior to task analysis, job training was accomplished almost exclusively by observational learning on-the-job ('sit by Nelly') and formal apprenticeships. Both these methods required a great deal of time and produced variable results for a couple of reasons. First, the role model did not always know what behaviors to highlight for the learner, for reasons discussed later. Second, some very critical steps or decisions occur very rarely and so are inefficient to observe in real-time."

They later indicate why they believe the role model (and, in some cases, the instructor, instructional designer, or researcher) does not "know what behaviors to highlight for the learner": "While experts often possess an abundance of declarative knowledge about their specialty, the vast majority of their knowledge lies in their automated procedural knowledge."

==== Evidence that CTA can be used to improve instruction ====
* "when the mental models used by experts can be elicited and represented by CTA, there is good evidence that it can be captured and taught to others, and that even a skilled performer can improve with an expert model (Staszewski, 1988)." (Clark & Estes, 1996)
* Biederman and Shiffrar's (1987) demonstration of bringing novices to near-expert performance with a short instructional activity (akin to [[feature focusing]]) based on a deep analysis of the cognitive (and perceptual) task experts perform when determining the gender of day-old chicks.

* From Clark, R. E., Feldon, D., van Merriënboer, J., Yates, K., & Early, S. (2007):

"Several studies provide direct evidence for the efficacy of CTA-based instruction. In a study of medical school surgical instruction, an expert surgeon taught a procedure (central venous catheter placement and insertion) to first-year medical interns in a lecture/demonstration/practice sequence (Maupin, 2003; Velmahos et al., 2004). The treatment group's lecture was generated through a CTA of two experts in the procedure. The control group's lecture consisted of the expert instructor's explanation as a free recall, which is the traditional instructional practice in medical schools. Both conditions allotted equal time for questions, practice, and access to equipment. The students in each condition completed a written posttest and performed the procedure on multiple human patients during their internships. Students in the CTA condition showed significantly greater gains from pretest to posttest than those in the control condition. They also outperformed the control group when using the procedure on patients in every measure of performance, including an observational checklist of steps in the procedure, number of needle insertion attempts needed to insert the catheter into patients' veins, frequency of required assistance from the attending physician, and time-to-completion for the procedure.

Similarly, Schaafstal et al. (2000) compared the effectiveness of a pre-existing training course in radar system troubleshooting with a new version generated from cognitive task analyses. Participants in both versions of the course earned equivalent scores on knowledge pretests. However, after instruction, students in the CTA-based course solved more than twice as many malfunctions, in less time, as those in the traditional instruction group. In all subsequent implementations of the CTA-based training design, the performance of every student cohort replicated or exceeded the performance advantage over the scores of the original control group.

Merrill (2002) compared CTA-based direct instruction with a discovery learning (minimal guidance) format and a traditional direct instruction format in spreadsheet use. The CTA condition provided direct instruction based on strategies elicited from a spreadsheet expert. The discovery learning format provided authentic problems to be solved and made an instructor available to answer questions initiated by the learners. The traditional direct instruction format provided explicit information on skills and concepts and guided demonstrations taken from a commercially available spreadsheet training course. Scores on the posttest problems favored the CTA-based instruction group (89% vs. 64% for guided demonstration vs. 34% for the discovery condition). Further, the average times-to-completion also favored the CTA group. Participants in the discovery condition required more than the allotted 60 minutes. The guided demonstration participants completed the problems in an average of 49 minutes, whereas the participants in the CTA-based condition required an average of only 29 minutes.

Generalizability of CTA-based training benefits. Lee (2004) conducted a meta-analysis to determine how generalizable CTA methods are for improving training outcomes across a broad spectrum of disciplines. A search of the literature in 10 major academic databases (Dissertation Abstracts International, Article First, ERIC, ED Index, APA/PsycInfo, Applied Science Technology, INSPEC, CTA Resource, IEEE, Elsevier/AP/Science Direct), using keywords such as "cognitive task analysis," "knowledge elicitation," and "task analysis," yielded 318 studies. Seven studies qualified, based on the qualifications of: training based on CTA methods with an analyst, conducted between 1985 and 2003, and reported pre- and posttest measures of training performance. A total of 39 comparisons of mean effect size for pre- and posttest differences were computed from the seven studies. Analysis of the studies found effect sizes between .91 and 2.45, which are considered to be large (Cohen, 1992). The mean effect size was d = +1.72, and the overall percentage of post-training performance gain was 75.2%. Results of a chi-square test of independence on the outcome measures of the pre- and posttests (χ² = 6.50, p < 0.01) indicated that CTA most likely contributed to the performance gain."
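
For reference (our gloss, not part of the quoted passage): the effect sizes reported above are standardized mean differences of the form

<math>d = \frac{\bar{X}_{\text{post}} - \bar{X}_{\text{pre}}}{s_{\text{pooled}}}</math>

so the mean effect size of d = +1.72 indicates post-training performance roughly 1.7 pooled standard deviations above pre-training performance.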

==== References ====
* Biederman, I., & Shiffrar, M. M. (1987). Sexing day-old chicks: A case study and expert systems analysis of a difficult perceptual learning task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13(4), 640-645.
* Clark, R. E., & Estes, F. (1996). Cognitive task analysis. International Journal of Educational Research, 25(5), 403-417.
* Clark, R. E., Feldon, D., van Merriënboer, J., Yates, K., & Early, S. (2007). Cognitive task analysis. In J. M. Spector, M. D. Merrill, J. J. G. van Merriënboer, & M. P. Driscoll (Eds.), Handbook of research on educational communications and technology (3rd ed., pp. 577-593). Mahwah, NJ: Lawrence Erlbaum Associates.
* Feldon, D. F. (2007). The implications of research on expertise for curriculum and pedagogy. Educational Psychology Review, 19, 91.
* Glaser, R., Lesgold, A., Lajoie, S., Eastman, R., Greenberg, L., Logan, D., Magone, M., Weiner, A., Wolf, R., & Yengo, L. (1985). Cognitive task analysis to enhance technical skills training and assessment (Final report to the Air Force Human Resources Laboratory on Contract No. F41689-83-C-0029). Pittsburgh, PA: Learning Research and Development Center, University of Pittsburgh.
* Lee, R. L. (2003). Cognitive task analysis: A meta-analysis of comparative studies. Unpublished doctoral dissertation, University of Southern California, Los Angeles.
* Redding, R. E. (1989). Perspectives on cognitive task analysis: The state of the state of the art. Proceedings of the Human Factors Society 33rd Annual Meeting.
* Staszewski, J. J. (1988). Skilled memory and expert mental calculation. In M. T. H. Chi, R. Glaser, & M. J. Farr (Eds.), The nature of expertise. Hillsdale, NJ: Lawrence Erlbaum.

[[Category:Glossary]]
[[Category:PSLC General]]

=Cognitive load=

Cognitive load refers to the demands placed on [[working memory]] during problem solving, thinking, and reasoning (including perception, memory, language, etc.).

Most would agree that people learn better when they can build on what they already understand. But the more things a person has to learn in a short amount of time, the more difficult it is to process that information in [[working memory]].

The notion of cognitive load has been used by Sweller and colleagues (see below) as the theoretical rationale for designing or choosing between [[instructional method]]s, on the premise that more learning is achieved when a method reduces extraneous cognitive load. For instance, interleaving [[worked examples]] between problem-solving activities has led to better learning, and is claimed to do so because the worked examples relieve the extraneous cognitive load that problem solving imposes through the need to store goals and subgoals in working memory.

Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257-285.

Sweller, J., van Merriënboer, J. J. G., & Paas, F. G. W. C. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10(3), 251-296.

Sweller, J. (1999). Instructional design in technical areas. Melbourne, Australia: ACER Press.

[[Category:Glossary]]
[[Category:PSLC General]]