2025 Applications Are Open
Application Deadline: May 1, 2025
Held in person in Pittsburgh, PA
at Carnegie Mellon University
- When: Monday, July 28, 2025 – Friday, August 1, 2025
- In person: 9 AM to 5 PM EDT – no remote option
- Where: Carnegie Mellon University
- Cost: $950 Professional Fee / $500 for Graduate Students, with the following exceptions:
- Computer Science Education Research (CER) track accepted applicants attend free (excluding travel, room & board) courtesy of the NSF SPLICE program.
- Some scholarships are available for full-time graduate students as well as for participants focused on computer science education (see application).
- Apply: Complete this Application
- Contact: LearnLab Help – email
- Generative AI: Attendees will receive guided instruction and hands-on experience applying and leveraging generative AI in their prototypes and experiments. Emphasis will be on applying prompt engineering techniques and model modifications to achieve desired results. No prior programming experience is necessary.
- Background Readings can be found here
- Important Dates:
- The deadline for applications is noon (12 PM) EDT on May 1, 2025.
- Admission decisions will be made by June 9, 2025.
The LearnLab Summer School is an intensive 1-week course focused on creating technology-enhanced learning experiments and building intelligent tutoring systems. The summer school will provide you with a conceptual background and considerable hands-on experience in designing, setting up, and running technology-enhanced learning experiments, as well as analyzing the data from those experiments in a technology-supported manner. Programming experience is not a prerequisite for attending.
The summer school lasts five days, evenly split between lectures and hands-on activities. Each day includes lectures, discussion sessions, and laboratory sessions where the participants work on developing a small prototype experiment in an area of math, science, or language learning. The participants use state-of-the-art tools including, but not limited to, the Open Learning Initiative (OLI) development environment, the Cognitive Tutor Authoring Tools (CTAT) and other tools for course development, tools for authoring natural language dialog, TagHelper tools for semi-automated coding of verbal data, and DataShop for storage of student interaction data and analysis of student knowledge and performance.
On the last day, student teams present their accomplishments to the rest of the participants. Participants are expected to do some preparation before the summer school starts.
The summer school is organized into six parallel tracks: Building online courses with OLI (BOLI), Chemistry Education (CE), Computational Models of Learning (CML), Intelligent Tutoring Systems development (ITS), Educational Data Mining (EDM), and (new as of 2023) Computer Science Education Research (CER). We particularly encourage participants who are addressing inequities in education and participants interested in computer science education.
The tracks will overlap somewhat but will differ significantly with respect to the hands-on activities, which make up about half the summer school. Although as a participant you will be assigned to one of the tracks, based on your preferences stated in the application, it will be possible to “shop around” – that is, participate in activities of tracks other than the one to which you have been “officially” assigned. Our primary concern is that the summer school will be a good learning experience for you.
The summer school involves intensive mentoring by LearnLab researchers, which starts by e-mail before the summer school (in order to select a subject domain and task for the project, where appropriate) and continues during the summer school with a good amount of one-on-one time during the hands-on sessions. The mentors are assigned based on your interests as stated in the application. (All participants will have the opportunity to interact with all course instructors, but will interact more frequently with their designated mentor.)
The following researchers are expected to serve as mentors and instructors:
Ken Koedinger
Vincent Aleven
Peter Brusilovsky
David Yaron
Mark Blaser
Thomas Price
Norman Bier
Erin Czerwinski
John Stamper
Erik Harpstead
Carolyn Rose
among others
The Six Tracks
Computer Science Education Research (CER): In this track, you will learn how to create, improve, and evaluate interactive learning content, instructional materials, learning technologies, and analytics for computer science education. You will learn how this design process is informed by existing CER research, and how to use your results both to further understand how people learn computer science and to improve their learning. CER is an interdisciplinary field that draws on education research, computer science, psychology, and other related fields to investigate questions related to computer science learning and teaching. For more on CER, see Dr. Amy Ko’s CER FAQ.
We will explore how students learn computer science concepts and skills; explain the impact of different pedagogical approaches on students’ learning and engagement; demonstrate how instrumenting a CS course or module for data-driven analysis enables educational data mining to be used as a tool to improve learning; and cover methods to reduce barriers to participation and increase diversity.
You might engage in the following:
- Design a novel system or intervention, grounded in educational theory, to improve learning in a CS classroom or informal learning environment.
- Create and evaluate a data-driven algorithm to automate some part of CS instruction (e.g., feedback, or problem selection).
- Design a study to rigorously evaluate an intervention in a CS classroom, using data to better understand how students used and benefited from the intervention.
- Instrument an existing CS learning environment and design analytics to better understand student learning in that environment.
- Apply learning analytics approaches to existing datasets from CS classrooms to answer a research question about how students learn.
Building online courses with OLI – OLI Track: In the OLI (Open Learning Initiative) track, you will focus on elements of effective course design, including the connection between learning objectives and learning outcomes. Participants will identify a course module that they would like to create, along with its expected learning outcomes. Over the course of the week, you will 1) refine your learning outcomes to make them precise and measurable, and 2) develop content, activities, and assessments to support these outcomes. If time permits, you may also develop a plan for completing additional course modules. The modules you create can be used in live classrooms via the OLI platform and improved over time using data from learner interactions. Participants will create OLI courseware and will be able to continue to use OLI tools and techniques after the summer session concludes.
This track will offer a two-tiered approach, introducing you to both the underlying pedagogical approach and design philosophy that supports OLI learning experiences and guiding you in the use of the tools and technologies that constitute the OLI platform. Carnegie Mellon’s Open Learning Initiative (OLI) develops online learning environments that integrate research and practice to deliver effective learning experiences while advancing our understanding of how humans learn. OLI technologies combine a standard set of learning activities with the ability to integrate additional education technologies; an OLI course could integrate technologies and approaches from the other Summer School tracks (e.g., incorporating a tutor or collaborative learning experience into a larger learning environment, or using EDM techniques to analyze data from your OLI course).
Computational Models of Learning (CML) Track: A key focus of this track is how the study of machine learning and the study of human learning go hand in hand. Computational Models of Learning (CML) simulate how students’ knowledge representations change in response to individual learning opportunities. While Educational Data Mining (EDM) takes a data-driven approach to analyzing how student performance evolves with practice, CML takes a theory-first approach to modeling learning through the development of machine-learning algorithms that simulate how particular learning experiences produce changes in students’ internal knowledge representations. By simulating learning as a developmental process, CML-based student simulations produce mistakes and correct responses just like real students as they learn directly from online educational materials such as ITSs. Attendees will learn about recently developed Computational Models of Learning, how their underlying theories are tested against student data, and how CML differs from other forms of student modeling and simulation. Attendees will have the opportunity to use computational models of learning for several practical purposes, including: 1) as simulated learners that generate synthetic student data, 2) as authoring tools that are taught interactively instead of programmed, and 3) as tools for generating explainable knowledge representations for analyzing expert knowledge and students’ knowledge as it evolves during learning. To help attendees simulate or support learning in their own domains of interest, CML track participants often also learn some of the basics of the ITS and EDM tracks (be prepared to learn a lot).
Chemistry Education (CE) Track: The Chemistry Education track investigates how students learn chemistry using REAL Chem courseware, which provides a full year of instructional materials with millions of data records collected from a range of two- and four-year higher education institutions. Unlike traditional pre- and post-tests, this data spans entire semesters, offering a rich, continuous view of student learning and allowing us to uncover patterns across topics. (See, for example, DOI: 10.26434/chemrxiv-2024-bms6x).
This track offers a hands-on introduction to chemistry education research. You’ll use REAL Chem materials and datasets to design your project, gaining experience with LearnSphere’s DataShop, a platform for secure data analysis that reveals insights like learning curves and performance trends. Additionally, you’ll explore Torus, an adaptive learning platform for modifying and creating instructional materials tailored to your research.
By the end of the week, you’ll identify a research question and develop a study design aligned with your goals. Topics might include problem-solving strategies, conceptual understanding, or molecular visualization. Whether you’re a chemistry instructor starting research or a learning scientist exploring chemistry education, this track provides tools, support, and inspiration to advance teaching and research.
ITS (Intelligent Tutoring System) Track: In the intelligent tutoring system development track, your goal will be to implement a prototype computer-based tutor using authoring tools developed by LearnLab researchers, such as CTAT (the Cognitive Tutor Authoring Tools), which supports the creation of intelligent tutoring systems. CTAT has been designed for non-programmers; you will be able to use these tools even if you have no programming experience. Depending on your interest, your tutor might be related to a planned or possible experiment (perhaps an in vivo experiment), to a tutor development project that you are involved in or are planning to start, or to a course that you are teaching. CTAT-built tutors typically focus on multi-step problem solving as is often found in math, physics, or chemistry, but they are also being applied with increasing success and frequency to language learning, where the exercises presented to students often have smaller granularity. During the week, you will start out by doing some cognitive task analysis to understand the nature of the problems for which your tutor will provide tutoring. Then, depending on your interest, you will use one or more of the tools described above to implement a computer-based tutor. By the end of the week, you will have a prototype running. In fact, if you decide to focus on intelligent tutoring system development, you will already have implemented some intelligent tutor behavior by the end of day 2 (an Example-Tracing Tutor).