October 14, 2013
Research on Cognitive Task Analysis: Capturing expertise for instruction
What have we recently learned about expertise from attempts to analyze the way experts perform tasks and solve problems? Dick Clark will describe the results of a number of experiments in which cognitive task analysis (CTA) was applied in professional fields such as health care, software design, and engineering. Among the findings to be discussed are evidence that approximately 70% of expert decisions are automated and non-conscious, the results of using CTA to identify additional expert decisions beyond the 30% usually captured for instruction, and the impact of using CTA-based information for the design of instruction. Dick will also describe some of the problems with the design of CTA studies and the need for future research, including the use of data mining strategies.
October 16, 2012
Using Data Mining and Multidimensional Item Response Theory to Analyze an MITx MOOC
Prof. Dave Pritchard, Cecil and Ida Green Professor of Physics at MIT. Education group site: http://RELATE.MIT.edu
~8000 students completed the 6.002x Massive Open Online Course (MOOC) in Spring 2012 – a course with videos, a wiki, a standard textbook, discussion fora, and both embedded and collected problems, some requiring use of a circuit simulator. We investigate the patterns of student attrition, resource use, and behavior on homework and during exams, seeking evidence for behaviors that correlate with skill and/or learning. Multidimensional Item Response Theory, used to analyze student responses to questions, revealed ~20 significant factors with distinct student skills and related question discrimination. During the midterm, students referred back mostly to questions with discrimination patterns similar to those of the midterm questions, showing that students categorize question similarity using these factors. We showed that questions requiring multiple attempts are a rich source of additional assessment information.
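The claim that students refer back to questions with "similar discrimination patterns" can be illustrated with a small sketch: in a multidimensional IRT model, each item has a vector of discrimination loadings across factors, and pattern similarity can be measured by cosine similarity between those vectors. The data below are made up for illustration and do not come from the 6.002x analysis.

```python
import numpy as np

# Hypothetical discrimination vectors (items x factors) from a fitted
# multidimensional IRT model; values are invented for this sketch.
disc = np.array([
    [0.9, 0.1, 0.0],   # item A: loads mainly on factor 1
    [0.8, 0.2, 0.1],   # item B: a pattern similar to A
    [0.1, 0.9, 0.3],   # item C: loads mainly on factor 2
])

def pattern_similarity(d):
    """Cosine similarity between item discrimination patterns."""
    unit = d / np.linalg.norm(d, axis=1, keepdims=True)
    return unit @ unit.T

sim = pattern_similarity(disc)
# sim[0, 1] is close to 1 (A and B share a pattern), while
# sim[0, 2] is much smaller (A and C load on different factors).
```

Under this measure, a midterm question would count as "similar" to the embedded questions whose rows of `sim` score highest against it.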
August 30, 2012
Adaptive help giving for physics homework problems
Brett van de Sande
Arizona State University
Abstract: Both human and computer tutors constantly make decisions about what kind of help they are going to give (or not give). Ideally, they make these decisions based on some determination of how the student is progressing coupled with some knowledge of what kind of tutoring strategies have worked in similar situations in the past. With this in mind, we have implemented a method for iteratively improving the help-giving policies of a computer tutor for introductory physics. We created a version of the tutor that randomly uses one of several (reasonable) policies when helping students and then deployed it in the classroom. Next, we used the resulting student log data and machine learning techniques to train a new version of the tutor which has improved hinting policies. Finally, we deployed the new version of the tutor in the classroom.
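The deploy-randomly-then-learn loop described above can be sketched in its simplest form: log which policy was used and whether the student succeeded, then select the policy with the best observed outcome. This is a minimal caricature of the approach; the actual log format, policy names, and learning method used in the tutor are not specified in the abstract and are invented here.

```python
from collections import defaultdict

# Hypothetical log from the randomized deployment:
# (policy_name, student_succeeded) pairs.
log = [
    ("hint_early", True), ("hint_early", True), ("hint_early", False),
    ("hint_late", False), ("hint_late", True), ("hint_late", False),
]

def best_policy(entries):
    """Pick the policy with the highest observed success rate."""
    wins = defaultdict(int)
    trials = defaultdict(int)
    for policy, success in entries:
        trials[policy] += 1
        wins[policy] += int(success)
    return max(trials, key=lambda p: wins[p] / trials[p])

print(best_policy(log))  # "hint_early": 2/3 beats "hint_late": 1/3
```

A real system would condition the choice on an estimate of student progress rather than picking one global winner, but the logged-randomization structure is the same.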
Bio: Brett van de Sande is an Assistant Research Professional and Computer Science and Engineering faculty member at Arizona State University. Dr. van de Sande's post-doctoral work was in theoretical physics (1994-1999). He taught physics and math at Geneva College (1999-2004) and conducted research in physics education and artificial intelligence at the University of Pittsburgh (2005-2008) before joining ASU in 2008.
July 9, 2012
Putting Research Into Practice
As an independent company and, now, as a part of the Apollo Group, Carnegie Learning has put an emphasis on active participation in research as an approach to improving educational outcomes. Balancing customer requests, sales needs and development priorities is difficult, and the path to commercialization does not always go as expected. In this talk, we'll discuss the challenges and the promise of translating university research into product improvements that have a significant impact on student learning. We'll provide examples of successful and unsuccessful commercialization and talk about plans for improving the process.
June 25, 2012
Marmoset: Automated Grading and Data Collection for CS Education
Marmoset is an integrated submission, grading and data collection system for programming courses. In addition to automating the grading of programming assignments, Marmoset provides tools that collect snapshots of students' files every time they save. This data is extremely fine-grained, with around 70% of the snapshots collected in CS-2 only changing four lines of code or fewer. Marmoset has been in use at the University of Maryland since the Fall 2004 semester, and is currently used by a dozen courses and over 700 students at Maryland each semester.
Marmoset also supports "release testing", a novel pedagogical innovation that rewards students who begin work early. The grading tests are divided into public tests (given to the students with the project) and release tests (kept on the server). When students upload their code to the server, they can spend a "release token", which reveals the number of release tests passed and failed, and additional information about only the first 2 failed release tests. Students cannot learn anything about the other failed release tests until they fix their code and spend another token. Furthermore, they only have 3 tokens that regenerate every 24 hours, so procrastination now has the cost of "lost" tokens.
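The token mechanics above are simple enough to model directly. The sketch below is a toy reconstruction of the rules as stated (3 tokens, 24-hour regeneration, detail on only the first 2 failures), not Marmoset's actual implementation; all names are invented.

```python
from dataclasses import dataclass

TOKENS_MAX = 3       # tokens per 24-hour window
REVEAL_DETAIL = 2    # detailed feedback for only the first 2 failed tests

@dataclass
class ReleaseTester:
    """Toy model of Marmoset-style release testing."""
    tokens: int = TOKENS_MAX

    def spend_token(self, results):
        """results: one bool per release test (True = passed)."""
        if self.tokens == 0:
            return None  # no tokens left; wait for regeneration
        self.tokens -= 1
        failed = [i for i, ok in enumerate(results) if not ok]
        return {
            "passed": sum(results),
            "failed": len(failed),
            "detailed": failed[:REVEAL_DETAIL],  # only first 2 identified
        }

    def regenerate(self):
        """Called once per 24-hour window."""
        self.tokens = TOKENS_MAX
```

Spending a token on results `[True, False, False, False]` reports 1 passed and 3 failed, but identifies only the first two failing tests; a student who has burned all 3 tokens gets nothing until the next day.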
Bio:
Jamie Spacco received a PhD from the University of Maryland, College Park in 2006; his dissertation focused on the Marmoset project described in this talk. He taught for 4 years at Colgate University in Hamilton, NY as a visiting professor (a very long series of one-year leave replacements) before starting a tenure-track job at Knox College in Galesburg, IL in 2010. While at Colgate he mostly worked on software engineering research; since arriving at Knox he has returned to CS education research.
June 11, 2012
Automated Student Model Improvement
Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowd sourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational technology data sets from intelligent tutors to games in a variety of domains from math to second language learning. In at least ten of the eleven cases, the method discovers improved models based on better test-set prediction in cross validation. The improvements isolate flaws in the original student models, and we show how focused investigation of flawed parts of models leads to new insights into the student learning process and suggests specific improvements for tutor design. We also discuss the great potential for future work that substitutes alternative statistical models of learning from the EDM literature or alternative model search algorithms.
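The "better test-set prediction in cross validation" criterion can be sketched concretely: score competing KC (knowledge component) models by how well a predictor trained on each model's labels predicts held-out student responses. The sketch below uses a deliberately trivial per-KC mean-correctness predictor in place of Learning Factors Analysis, and the data are synthetic; it only illustrates the comparison logic.

```python
import numpy as np

def cv_rmse(kcs, correct, n_folds=5, seed=0):
    """Cross-validated RMSE of predicting correctness from per-KC means.

    kcs:     KC label assigned to each student step under some model
    correct: 1/0 correctness of each step
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(correct))
    errs = []
    for fold in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, fold)
        # per-KC mean correctness estimated on the training folds
        rates = {}
        for k in set(kcs):
            mask = [i for i in train if kcs[i] == k]
            rates[k] = np.mean([correct[i] for i in mask]) if mask else 0.5
        preds = np.array([rates.get(kcs[i], 0.5) for i in fold])
        actual = np.array([correct[i] for i in fold])
        errs.extend((preds - actual) ** 2)
    return float(np.sqrt(np.mean(errs)))

# Synthetic data where a finer-grained KC model captures a real skill split:
n = 40
correct = [1 if i % 2 == 0 else 0 for i in range(n)]
coarse = ["skill"] * n                                   # one KC for everything
fine = ["add" if i % 2 == 0 else "frac" for i in range(n)]  # two KCs
# The finer model predicts held-out responses better (lower RMSE),
# so the search would prefer it.
```

In the actual work, the predictor is a statistical learning-curve model and the candidate KC models come from a search over DataShop annotations, but the model-selection step has this shape.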
December 13, 2010
Analytic representations: the design of tools to create and exploit them
My dissertation focused on computer support for "human analysis" (as opposed to semi-automated or computer-assisted analysis). As a result, I created a tool called Tatiana ( http://code.google.com/p/tatiana ), which has interesting properties for researchers who collect and analyse various kinds of process data (videos, computer logs, transcripts, etc.). Tatiana is built on a framework for constructing and managing analytic representations using four quasi-orthogonal operation types: transformation, enrichment, visualisation and synchronisation.
This talk will describe Tatiana and the underlying framework:
- What Tatiana can currently do and what features are planned for the future.
- How Tatiana may be able to help solve research problems when analyzing videos, computer logs, transcripts, etc.
- Situations in which it might be beneficial to extend Tatiana, rather than constructing new software from scratch.
About Gregory Dyke: I am interested in the creation of tools to help humans analyse data of computer-mediated collaboration (and learning). My PhD resulted in the creation of Tatiana (Trace Analysis Tool for Interaction ANAlysts), a flexible, extensible tool particularly well suited for the analysis of small-group face-to-face and computer-mediated interaction. My current work involves examining and assisting the discovery of how interaction unfolds over time.
December 10, 2010
Using DataShop Tools to Model Students Learning Statistics - A LearnLab DataShop Case Study
Marsha Lovett describes using the LearnLab DataShop to improve the Carnegie Mellon Statistics course.
August 30, 2010
Physical Symbols for Powerful Reasoning
One way that cultures advance the intelligence of their members is through the symbolic forms (e.g., language, numbers) they promulgate through formal and informal education. Algebra is a prime example of an external symbolic system that, once learned, greatly enhances human intelligence. This enhancement is reflected in better performance in more complex problem solving even though it may inhibit performance for simpler problem solving (Koedinger, Alibali, & Nathan, 2008). Learning to effectively use this external representational tool is not easy: even discounting the time needed to acquire adequate background knowledge, it takes most students a school year or two to learn algebra. In other words, many changes in /internal/ cognition are required before effective use of this /external/ representation is possible. I will discuss our experiments on algebra learning with real and simulated students (e.g., Matsuda, Cohen, Sewall, Lacerda, & Koedinger, 2007) and emphasize the productive interplay between internal and external cognition. I will explore whether there is a human-algebra distributed system that has learning properties beyond the human system.
July 25, 2010
KDD Cup Workshop
2010 KDD Cup Workshop in Washington, DC.
July 21, 2010
Cognitive Science 2010 Plenary Talk
Marsha will talk about knowledge component (KC) modeling in the context of the OLI-Statistics course, showing a new tool that helps instructors track their students' progress based on the models, and then describing some results from a series of studies showing accelerated learning in the OLI-Statistics course when instructors use this tool.
June 14, 2010
Thinking with your Hands
I will demonstrate some of these possibilities by discussing a study with interactive fraction representations we conducted this spring with 312 4th and 5th-grade students in 13 classes. Students working with interactive fraction representations came away having learned more than students working with static representations and traditional inputs.
May 12, 2010
Engagement, Learning, and Assessment in Immersive Environments
May 10, 2010
Adventures in Researching Self-Regulated Learning
Bio: Philip H. Winne is a professor of educational psychology and Canada Research Chair in Self-Regulated Learning and Learning Technologies at Simon Fraser University. Winne has made significant contributions to research on self-regulated learning. He is the principal investigator of the Learning Kit Project, which has developed educational software founded on principles of self-regulated learning. Before earning a PhD from Stanford University in 1976, Winne received undergraduate and masters degrees from Bucknell University. He has served as co-editor of the Educational Psychologist and associate editor of the British Journal of Educational Psychology. Winne has authored (or co-authored) over 70 peer-reviewed journal articles, over 30 book chapters, and 5 books including an introductory textbook on educational psychology that is widely used in Canada (Woolfolk, Winne, & Perry, 2006). Phil on the Web: http://www.educ.sfu.ca/research/winne/