Cognitive task analysis
Clark and Estes (1996) define Cognitive Task Analysis (CTA) as "the general term used to describe a set of methods and techniques that specify the cognitive structures and processes associated with task performance. The focal point is the underlying cognitive processes, rather than observable behaviors. Another defining characteristic of CTA is an attempt to describe the differences between novices and experts in the development of knowledge about tasks (Redding, 1989)."
Feldon (2007) suggests a narrower definition in passing, "the use of structured knowledge elicitation techniques (e.g., cognitive task analysis)", which emphasizes eliciting knowledge from experts, presumably through the more direct approaches of interviews or think-alouds. Whether the more indirect approach of using extensive student performance data to do knowledge component analysis via educational data mining would count as a version of CTA thus remains an open question.
See also the knowledge decomposability hypothesis.
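To make the data-mining alternative concrete, the sketch below shows in miniature what a knowledge component analysis over student log data can look like: compute an error rate per practice opportunity for each candidate knowledge component and check whether the curve declines. The records and field names here are hypothetical, invented purely for illustration; real analyses (e.g., over DataShop-style logs) fit statistical models to far larger datasets rather than eyeballing counts.

# Minimal sketch of knowledge component (KC) analysis over student log data.
# All records and names below are hypothetical illustrations, not a PSLC tool.
from collections import defaultdict

# Each record: (student, knowledge_component, opportunity_number, correct)
log = [
    ("s1", "slope-intercept", 1, 0), ("s1", "slope-intercept", 2, 1),
    ("s1", "slope-intercept", 3, 1), ("s2", "slope-intercept", 1, 0),
    ("s2", "slope-intercept", 2, 0), ("s2", "slope-intercept", 3, 1),
]

# Aggregate errors and attempts by (KC, opportunity number).
totals = defaultdict(lambda: [0, 0])  # (kc, opp) -> [errors, attempts]
for student, kc, opp, correct in log:
    totals[(kc, opp)][0] += 1 - correct
    totals[(kc, opp)][1] += 1

for (kc, opp), (errors, attempts) in sorted(totals.items()):
    print(f"{kc}, opportunity {opp}: error rate {errors / attempts:.0%}")

# A declining error rate across opportunities suggests the candidate KC
# captures a real unit of learning; a flat curve suggests the KC model
# mislabels the steps and should be revised.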
Clark and Estes (1996) highlight the general value of task analysis: "Prior to task analysis, job training was accomplished almost exclusively by observational learning on-the-job ("sit by Nelly") and formal apprenticeships. Both these methods required a great deal of time and produced variable results for a couple of reasons. First, the role model did not always know what behaviors to highlight for the learner, for reasons discussed later. Second, some very critical steps or decisions occur very rarely and so are inefficient to observe in real-time."
They later indicate why they believe the role model (and, in some cases, the instructor, instructional designer, or researcher) does not "know what behaviors to highlight for the learner": "While experts often possess an abundance of declarative knowledge about their specialty, the vast majority of their knowledge lies in their automated procedural knowledge."
Evidence that CTA can be used to improve instruction
- "when the mental models used by experts can be elicited and represented by CTA, there is good evidence that it can be captured and taught to others, and that even a skilled performer can improve with an expert model (Staszewski, 1988)." (Clark and Estes, 1996)
- Biederman & Shiffrar's (1987) demonstration that a short instructional activity (akin to feature focusing), based on a deep analysis of the cognitive and perceptual task experts perform when determining the gender of day-old chicks, brought novices to near-expert performance.
- From Clark, R. E., Feldon, D., van Merriënboer, J., Yates, K., & Early, S. (2007):
"Several studies provide direct evidence for the efficacy of CTA-based instruction. In a study of medical school surgical instruction, an expert surgeon taught a procedure (central venous catheter placement and insertion) to first-year medical interns in a lecture/demonstration/practice sequence (Maupin, 2003; Velmahos et al., 2004). The treatment group’s lecture was generated through a CTA of two experts in the procedure. The control group’s lecture consisted of the expert instructor’s explanation as a free recall, which is the traditional instructional practice in medical schools. Both conditions allotted equal time for questions, practice, and access to equipment. The students in each condition completed a written posttest and performed the procedure on multiple human patients during their internships. Students in the CTA condition showed significantly greater gains from pretest to posttest than those in the control condition. They also outperformed the control group when using the procedure on patients in every measure of performance, including an observational checklist of steps in the procedure, number of needle insertion attempts needed to insert the catheter into patients veins, frequency of required assistance from the attending physician, and time-to completion for the procedure.
Similarly, Schaafstal et al. (2000) compared the effectiveness of a pre-existing training course in radar system troubleshooting with a new version generated from cognitive task analyses. Participants in both versions of the course earned equivalent scores on knowledge pretests. However, after instruction, students in the CTA-based course solved more than twice as many malfunctions, in less time, as those in the traditional instruction group. In all subsequent implementations of the CTA-based training design, the performance of every student cohort replicated or exceeded the performance advantage over the scores of the original control group.
Merrill (2002) compared CTA-based direct instruction with a discovery learning (minimal guidance) format and a traditional direct instruction format in spreadsheet use. The CTA condition provided direct instruction based on strategies elicited from a spreadsheet expert. The discovery learning format provided authentic problems to be solved and made an instructor available to answer questions initiated by the learners. The traditional direct instruction format provided explicit information on skills and concepts and guided demonstrations taken from a commercially available spreadsheet training course. Scores on the posttest problems favored the CTA-based instruction group (89% vs. 64% for guided demonstration vs. 34% for the discovery condition). Further, the average times-to-completion also favored the CTA group. Participants in the discovery condition required more than the allotted 60 minutes. The guided demonstration participants completed the problems in an average of 49 minutes, whereas the participants in the CTA-based condition required an average of only 29 minutes.
Generalizability of CTA-based training benefits. Lee (2004) conducted a meta-analysis to determine how generalizable CTA methods are for improving training outcomes across a broad spectrum of disciplines. A search of the literature in 10 major academic databases (Dissertation Abstracts International, Article First, ERIC, ED Index, APA/PsycInfo, Applied Science Technology, INSPEC, CTA Resource, IEEE, Elsevier/AP/Science Direct), using keywords such as “cognitive task analysis,” “knowledge elicitation,” and “task analysis,” yielded 318 studies. Seven studies qualified, based on the criteria that the training was based on CTA methods with an analyst, was conducted between 1985 and 2003, and reported pre- and posttest measures of training performance. A total of 39 comparisons of mean effect size for pre- and posttest differences were computed from the seven studies. Analysis of the studies found effect sizes between .91 and 2.45, which are considered to be large (Cohen, 1992). The mean effect size was d = +1.72, and the overall percentage of post-training performance gain was 75.2%. Results of a chi-square test of independence on the outcome measures of the pre- and posttests (χ2 = 6.50, p < 0.01) indicated that CTA most likely contributed to the performance gain."
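As a worked illustration of the statistics quoted above, the sketch below recomputes a Cohen's d effect size (a standardized pre/post mean difference) and a percentage performance gain from hypothetical pre- and posttest scores. The numbers are invented, chosen only so that d lands near Lee's reported mean of +1.72; they are not data from any of the seven studies.

# Worked illustration of the effect-size arithmetic reported by Lee (2004).
# Means and standard deviations below are hypothetical, for illustration only.
import math

pre_mean, post_mean = 52.0, 78.0  # hypothetical test scores (percent correct)
pre_sd, post_sd = 16.0, 14.0      # hypothetical standard deviations

# Cohen's d: mean difference standardized by the pooled standard deviation.
pooled_sd = math.sqrt((pre_sd**2 + post_sd**2) / 2)
d = (post_mean - pre_mean) / pooled_sd
print(f"Cohen's d = {d:+.2f}")    # ~ +1.73; values above 0.8 count as "large"

# Percentage gain from pre- to post-training, analogous to Lee's 75.2% figure.
gain = (post_mean - pre_mean) / pre_mean * 100
print(f"Post-training gain = {gain:.1f}%")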
References
- Biederman, I., & Shiffrar, M. M. (1987). Sexing day-old chicks: A case study and expert systems analysis of a difficult perceptual learning task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 13(4), 640-645.
- Clark, R. E., & Estes, F. (1996). Cognitive task analysis. International Journal of Educational Research, 25(5), 403-417.
- Clark, R. E., Feldon, D., van Merriënboer, J., Yates, K., & Early, S. (2007). Cognitive task analysis. In J. M. Spector, M. D. Merrill, J. J. G. van Merriënboer, & M. P. Driscoll (Eds.), Handbook of research on educational communications and technology (3rd ed., pp. 577–593). Mahwah, NJ: Lawrence Erlbaum Associates.
- Feldon, D. F. (2007). The implications of research on expertise for curriculum and pedagogy. Educational Psychology Review, 19(2), 91-110.
- Glaser, R., Lesgold, A., Lajoie, S., Eastman, R., Greenberg, L., Logan, D., Magone, M., Weiner, A., Wolf, R., & Yengo, L. (1985). Cognitive task analysis to enhance technical skills training and assessment (Final Report to the Air Force Human Resources Laboratory on Contract No. F41689-83-C-0029). Pittsburgh, PA: Learning Research and Development Center, University of Pittsburgh.
- Lee, R. L. (2004). Cognitive task analysis: A meta-analysis of comparative studies. Unpublished doctoral dissertation, University of Southern California, Los Angeles, California.
- Redding, R.E. (1989). Perspectives on cognitive task analysis: The state of the state of the art. Proceedings of the Human Factors Society 33rd Annual Meeting.
- Staszewski, J.J. (1988). Skilled memory and expert mental calculation. In M.T.H. Chi, R. Glaser, and M.J. Farr (Eds.), The nature of expertise. Hillsdale, NJ: Lawrence Erlbaum.