Decimal Point: A Retrospective on Experiments with a Digital Learning Game
Bruce McLaren, Carnegie Mellon University
September 19, 2022
Abstract: The McLearn Lab at Carnegie Mellon University has developed the Decimal Point learning game and has run a series of classroom studies over the past eight years. In these studies, we have explored a variety of game-based learning and learning science questions, such as whether the game leads to better learning than a more traditional approach; whether giving students more agency leads to more learning and enjoyment; whether students benefit from hints and error messages; and what types of prompted self-explanation lead to the best learning and enjoyment outcomes. In this talk I will discuss how we designed the game, the technology it is built on, and the key results of the experiments we've conducted. I will conclude with the important takeaways from our studies, as well as what we have learned about using a digital learning game as a research platform.
Designing Culturally-relevant Educational Technology at a Global Scale
Amy Ogan, Carnegie Mellon University
May 23, 2022
Abstract: With increased access to digital technologies worldwide and across socioeconomic groups, the world is poised for a revolution in teaching and learning practices. The COVID-19 pandemic has enabled new opportunities for some learners but starkly reminds us that these resources are still not equally distributed. New technologies can foster equal opportunities for education and quality of life, or amplify social divisions. In this talk we will discuss the design of next-generation learning technologies for a global audience, in which educational experiences are built with a computer-in-the-loop approach, pushing the boundaries of how technology can be applied to learning. In this way we can use technology to address the humanity and context of the learner and the ecology in which they learn.
A Question of Opportunities
John Stamper, Carnegie Mellon University
April 18, 2022
Abstract: In order to provide advanced knowledge tracing and ultimately adaptive learning in online courses, students need sufficient opportunities to show their knowledge in the form of answering questions. Previous work on adding adaptivity to existing online courses has shown that many courses require significantly more opportunities/questions than are currently available. In this talk I will cover some recent work on improving the generation of questions for online courses with human-AI approaches applied to data from Carnegie Mellon's OLI system. Our work uses both automatic generation, applying NLP to course materials, and learnersourcing with students in the courses. I will discuss the current state of the art in both areas and the challenges we are currently overcoming.
An Astonishing Regularity in Student Learning Rate
Ken Koedinger, Carnegie Mellon University
March 21, 2022
Abstract: Leveraging a scientific infrastructure for exploring how people learn academic skills, we have developed cognitive and statistical models of skill acquisition and used them to understand fundamental similarities and differences across learners. Our primary question was: why do some students learn faster than others? Or do they? We model data from student performance on groups of tasks that assess the same skill component and that provide follow-up instruction on student errors. Our models estimate, for both students and skills, initial correctness and learning rate, that is, the increase in correctness after each practice opportunity. We applied our models to 1.3 million observations across 27 datasets of student interactions with online practice systems in the context of elementary to college courses in math, science, and language. Despite the availability of up-front verbal instruction, like lectures and readings, students demonstrate modest initial performance at the start of practice, at about 67% accuracy. Despite being in the same course, students' initial performance varies substantially, from about 55% correct for those in the lower half to 75% for those in the upper half. In contrast, and much to our surprise, we found students to be astonishingly similar in estimated learning rate (about 0.1 log odds in error reduction per practice opportunity). We suggest this similarity is dependent on the favorable conditions for deliberate practice provided by interactive online learning systems. These findings pose new challenges for theories of learning to explain both large initial performance variation and astonishing regularity in student learning rate.
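The model the abstract describes can be sketched as logistic growth in log odds of a correct response. The sketch below is illustrative only: the numbers (about 67% initial accuracy, about 0.1 log odds gained per practice opportunity) are the averages reported in the abstract, and the function names are hypothetical, not from the speaker's actual models.

```python
import math

def logit(p):
    """Convert a probability to log odds."""
    return math.log(p / (1 - p))

def sigmoid(x):
    """Convert log odds back to a probability."""
    return 1 / (1 + math.exp(-x))

# Averages reported in the abstract (illustrative, not per-student estimates).
INITIAL_ACCURACY = 0.67   # typical accuracy at the start of practice
LEARNING_RATE = 0.1       # log-odds gain per practice opportunity

def predicted_accuracy(opportunities):
    """Predicted accuracy after a given number of practice opportunities."""
    return sigmoid(logit(INITIAL_ACCURACY) + LEARNING_RATE * opportunities)

def opportunities_to_reach(target):
    """Practice opportunities needed to reach a target accuracy under this model."""
    return math.ceil((logit(target) - logit(INITIAL_ACCURACY)) / LEARNING_RATE)
```

Under these averages, a student starting at 67% would need roughly 7 opportunities to reach 80% accuracy and about 23 to reach 95%, which conveys how small a per-opportunity gain of 0.1 log odds is, and why sufficient practice opportunities matter.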
Bio: Ken Koedinger is a professor of Human Computer Interaction and Psychology at Carnegie Mellon University. Dr. Koedinger has an M.S. in Computer Science, a Ph.D. in Cognitive Psychology, and experience teaching in an urban high school. His multidisciplinary background supports his research goals of understanding human learning and creating educational technologies that increase student achievement. His research has contributed new principles and techniques for the design of educational software and has produced basic cognitive science research results on the nature of student thinking and learning. Koedinger directs LearnLab, which started with 10 years of National Science Foundation funding and is now the scientific arm of CMU's Simon Initiative. LearnLab builds on the past success of Cognitive Tutors, an approach to online personalized tutoring that is in use in thousands of schools and has been repeatedly demonstrated to increase student achievement, for example, doubling what algebra students learn in a school year. He was a co-founder of Carnegie Learning, Inc., which has brought Cognitive Tutor-based courses to millions of students since it was formed in 1998. Dr. Koedinger has authored over 250 peer-reviewed publications and has been a project investigator on over 45 grants. In 2017, he received the Hillman Professorship of Computer Science and in 2018, he was recognized as a fellow of the Cognitive Science Society.
Postsecondary Adoption, Use, and Improvement of Open, Adaptive Learning Technologies
Lauren Herckis, Carnegie Mellon University
February 21, 2022
Abstract: Innovations in adaptive learning technologies and learning analytics promise to put powerful tools into educators’ hands, personalize learning experiences, and improve learning outcomes for students. Edtech adoption has accelerated during the global pandemic, but adoption practices often fail to match the expectations of researchers or the aspirations of developers. If these technologies stand poised to revolutionize education, why aren’t they being rapidly and widely adopted? This work takes an implementation science approach to identifying the challenges and opportunities associated with the adoption and effective use of open, adaptive courseware that employs analytic tools providing faculty, student, and crowdsourced feedback and participation. I identify several high-level challenges that serve as barriers to effective adoption and prevent both educators and learners from using edtech as intended. These include mismatched expectations between novice and expert users and underestimation of resources required to enable adoption. Using a diffusion of innovations analytical framework, I examine how these challenges relate to one another, build an understanding of the factors that affect adoption and integration of open, adaptive courseware, and make recommendations to improve uptake and sustained implementation.
Bio: Lauren Herckis is an anthropologist at Carnegie Mellon University with a faculty appointment in the University Libraries and a courtesy appointment in the Human-Computer Interaction Institute. Before joining Carnegie Mellon’s Simon Initiative as a postdoctoral research associate, she worked as part of the Center for Health Equity Research and Promotion (CHERP) at the U.S. Department of Veterans Affairs and earned her PhD in Anthropology from the University of Pittsburgh. Her current research on technology adoption, supported by the California Education Learning Labs and the Hillman Family Foundations, explores the intersection of campus culture, technological change, and effective teaching. Recent projects have examined the barriers to effective use of evidence-based tools and practices, explored the ways that faculty identity shapes selection of teaching strategies, and produced protocols which help faculty employ effective technology-enhanced learning tools with fidelity. Past projects have been supported by the National Science Foundation, Fulbright Institute of International Education, Carnegie Corporation of New York, and a POD Early Career Researcher Award.
Systems for Learning in a Computationally-Mediated Future of Work
Chinmay Kulkarni, Carnegie Mellon University
Abstract: Enabled by the internet, and accelerated by the pandemic, the future of work is already here. Workers collaborate with distant colleagues they have never met in person, employers rely on the matching algorithms of online labor platforms to find and connect with freelancers around the world, and even seemingly “offline” businesses are increasingly computationally-mediated, for example, relying on social media platforms to connect with their customers. However, computationally-mediated work today largely lacks tools that support learning: without a supervisor, freelancers struggle to identify which skills will help them grow, and without face-to-face interactions, online collaborators often fail to learn how to work together effectively.
Based on my research, which has resulted in tools that have helped millions of people learn in massive open online courses (MOOCs), I argue for a new approach to supporting learning in computationally-mediated work. Specifically, I argue for designing systems that use findings from the behavioral sciences to create social interactions that scaffold learning, and then use computational techniques to weave these interactions into the fabric of work and amplify their benefits. In this talk, I demonstrate this approach with systems that help people identify which skills to learn, collaborate more effectively, and develop more productive behaviors.
Bio: Chinmay Kulkarni is an Associate Professor in Human-Computer Interaction whose research introduces scalable computer-aided technology for large-scale education and online work. His lab has created systems that scale feedback and assessment to thousands of learners in massive online classes, systems that extend peer feedback to work contexts where competition may prevent honest feedback, and systems that help people learn tacit skills needed in new forms of work, such as remote work. These systems have directly helped more than 50,000 learners, and the related research findings have been adopted by companies as varied as Coursera, Mozilla, and Instagram. His lab is also developing community-based and participatory design approaches that can yield scalable socio-technical solutions while resisting the impulse to position certain community needs as edge cases. This research is currently supported by the National Science Foundation, the US Department of Education, and the Office of Naval Research. Past research sponsors include Mozilla and Facebook/Instagram. Before coming to Carnegie Mellon, he earned a PhD from Stanford's Computer Science Department.