Hausmann Study2


The Effects of Interaction on Robust Learning

Robert Hausmann and Kurt VanLehn

Abstract

It is widely assumed that an interactive learning resource is more effective in producing learning gains than a non-interactive one. It turns out, however, that this assumption may not be completely accurate. For instance, research on human tutoring suggests that human tutoring (i.e., interactive) is just as effective as reading a textbook (i.e., non-interactive) under very particular circumstances (VanLehn et al., 2007). This raises the question: under which conditions should we expect to observe strong learning gains from interactive learning situations?

The current project seeks to address this question by contrasting interactive learning (i.e., jointly constructing explanations) with non-interactive learning (i.e., individually constructing explanations). Students were prompted to either self-explain in the singleton condition or to jointly construct explanations in the dyad condition.

Background and Significance

Several studies have shown that collaborative learning is more effective in producing learning gains than learning the same material alone. This finding has been replicated across many different configurations of students and several different domains. Once the effect was established, the field moved into a more interesting phase: accurately describing the interactions themselves and their impact on student learning (Dillenbourg, 1999). One active topic in collaborative learning research is the "co-construction" of new knowledge. Co-construction has been defined in many different ways; therefore, the present study limits the scope of co-constructed ideas to jointly constructed explanations.

Evidence supporting jointly constructed explanations is sparse, but can be found in a study by McGregor and Chi (2002). They found that collaborative peers not only jointly construct ideas, but also reuse those ideas in a later problem-solving session. One limitation of their study was that it did not measure the impact of jointly constructed ideas on robust learning. In a related study, Hausmann, Chi, and Roy (2004) found correlational evidence for learning from co-construction. To provide more stringent evidence for the impact of jointly constructed explanations, the present study manipulates the types of conversations dyads have by prompting for jointly constructed explanations and measuring the effect on robust learning.

Glossary

See Hausmann_Study2 Glossary

Research question

How is robust learning affected by self-explanation vs. jointly constructed explanations?

Independent variables

Only one independent variable was used:

  • Explanation-construction: individually constructed explanations vs. jointly constructed explanations

Prompting for an explanation was intended to increase the probability that the individual or dyad will traverse a useful learning-event path.

Hypothesis

The Interactive Hypothesis: collaborative peers will learn more than individual learners because they benefit from the process of negotiating meaning with a peer, appropriating part of the peer's perspective, building and maintaining common ground, and articulating their knowledge and clarifying it when the peer misunderstands. In terms of the Interactive Communication cluster, the hypothesis states that, even when controlling for the number of knowledge components covered, the dyads will learn more than the individuals.

Dependent variables

  • Near transfer, immediate: electrodynamics problems solved in Andes during the laboratory period.

Results

In vitro Experiment

We conducted an in vitro experiment during the Spring 2007 semester. Undergraduate volunteers, who were enrolled in the second semester of physics at the University of Pittsburgh, composed the sample for the study. Unfortunately, the sample size was small because our pool of participants was extremely limited.

As in our first experiment, we used normalized assistance scores. A normalized assistance score was defined as the sum of all the errors and requests for help on a problem divided by the number of entries made in solving that problem. Lower assistance scores therefore indicate that the student derived a solution while making fewer mistakes and getting less help, demonstrating better performance and understanding.
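
As a concrete illustration, the following minimal sketch (in Python; not part of the study materials) shows how a normalized assistance score could be computed from per-problem counts. The function name and the example counts are hypothetical.

  def normalized_assistance(errors, help_requests, entries):
      """Normalized assistance score for one problem:
      (errors + requests for help) / number of solution entries.
      Lower scores mean fewer mistakes and less help per entry."""
      if entries == 0:
          raise ValueError("a solved problem must have at least one entry")
      return (errors + help_requests) / entries

  # Hypothetical example: 3 errors and 2 help requests over 12 entries
  print(normalized_assistance(errors=3, help_requests=2, entries=12))  # ~0.42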

The results from the laboratory study were as follows:

  • Differences between conditions

The jointly constructed explanation (JCE) condition (M = .45, SD = .14) demonstrated lower assistance scores than the individually constructed explanation (ICE) condition (M = 1.00, SD = .15). The difference between experimental conditions was statistically reliable and of high practical significance, F(1, 23) = 7.33, p = .01, ηp2 = .24.
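
For readers who want to see how this kind of comparison can be computed, the sketch below runs a one-way ANOVA and derives partial eta squared from its sums of squares. The per-student scores are invented for illustration and are not the study's data.

  import numpy as np
  from scipy import stats

  # Hypothetical mean normalized assistance scores per student (not the study's raw data)
  ice = np.array([1.10, 0.95, 1.20, 0.85, 1.05, 0.90, 1.15, 0.98, 1.02])  # individuals
  jce = np.array([0.40, 0.55, 0.35, 0.50, 0.45, 0.60, 0.38, 0.42,
                  0.48, 0.52, 0.44, 0.36, 0.58, 0.47])                    # dyad members

  f, p = stats.f_oneway(ice, jce)

  # Partial eta squared for a one-way design: SS_between / (SS_between + SS_within)
  grand = np.concatenate([ice, jce]).mean()
  ss_between = len(ice) * (ice.mean() - grand) ** 2 + len(jce) * (jce.mean() - grand) ** 2
  ss_within = ((ice - ice.mean()) ** 2).sum() + ((jce - jce.mean()) ** 2).sum()
  eta_p2 = ss_between / (ss_between + ss_within)

  print(f"F(1, {len(ice) + len(jce) - 2}) = {f:.2f}, p = {p:.3f}, partial eta^2 = {eta_p2:.2f}")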

  • Problem by condition

The pattern observed at the level of conditions was replicated at the level of individual problems. That is, when problem was used as a repeated factor in a multivariate analysis of variance (MANOVA), the JCE condition demonstrated lower normalized assistance scores for all of the problems except the first, warm-up problem (see the table below).


Mean normalized assistance scores by problem and condition:

Problem    ICE (n = 9)    JCE (n = 14)    p       ηp2
Prob1      0.75           0.63            .483    .024
Prob2      1.09           0.32            .003    .341
Prob3      1.08           0.51            .059    .160
Prob4      0.67           0.29            .034    .196


In addition to providing higher quality solutions, the jointly constructed explanation condition (M = 985.71, SD = 45.60) also solved its problems more quickly than the individually constructed explanation condition (M = 1097.75, SD = 51.45). Although the omnibus difference between experimental conditions was not statistically reliable, F(1, 23) = 2.66, p = .12, ηp2 = .10, solution times for the second and third problems were reliably lower in the JCE condition. This finding is particularly interesting because the experiment was capped at two hours; the dyads were therefore able to complete the problem set more often than the individuals. However, this difference in completion rates did not reach statistical significance, χ2 = 22.91, p = .15.
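
The completion-rate comparison can be illustrated with a chi-square test on a 2x2 table of completed vs. uncompleted problem sets per condition; the counts below are hypothetical, not the study's actual counts.

  from scipy.stats import chi2_contingency

  # Rows: condition (ICE, JCE); columns: (completed the set, did not complete) -- hypothetical counts
  table = [[4, 5],
           [11, 3]]

  chi2, p, dof, expected = chi2_contingency(table)
  print(f"chi^2({dof}) = {chi2:.2f}, p = {p:.3f}")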

  • Knowledge component (KC) by condition

Because not all of the individuals were able to complete the entire problem set, their data could not be included in an analysis of the knowledge components. A MANOVA assumes that each participant contributes data on all of the measures, which is not the case when individuals did not complete the last problem. Therefore, this fine-grained analysis of learning will need to wait until the study can be replicated in the classroom with a larger sample size.

Explanation

This study is part of the Interactive Communication cluster, and it hypothesizes that prompting singletons and dyads for explanations should increase the probability that they traverse useful learning-event paths. However, it is unclear whether the act of communicating with a partner should increase learning if the type of statements (i.e., explanations) is held constant. A strong-sense version of the Interactive Communication hypothesis would suggest that interacting with a peer is beneficial for learning because dyad members can learn from their partner by assimilating knowledge components articulated by the partner or via corrective comments that help to refine vague or incorrect knowledge components.

Both self-explaining and joint-explaining should lead to deeper knowledge-construction monologs/dialogs because they are likely to include integrative statements that connect information with prior knowledge, connect information with previously stated material, or infer new knowledge. However, deeper knowledge construction may be more likely during dialog because the communicative partner provides a social cue to avoid glossing over the material.

Annotated bibliography

  • Presented at a PSLC lunch: June 12, 2006

References

  1. Dillenbourg, P. (1999). What do you mean "collaborative learning"? In P. Dillenbourg (Ed.), Collaborative learning: Cognitive and computational approaches (pp. 1-19). Oxford: Elsevier.
  2. Hausmann, R. G. M., & Chi, M. T. H. (2002). Can a computer interface support self-explaining? Cognitive Technology, 7(1), 4-14.
  3. Hausmann, R. G. M., Chi, M. T. H., & Roy, M. (2004). Learning from collaborative problem solving: An analysis of three hypothesized mechanisms. In K. D. Forbus, D. Gentner, & T. Regier (Eds.), Proceedings of the 26th Annual Conference of the Cognitive Science Society (pp. 547-552). Mahwah, NJ: Lawrence Erlbaum.
  4. McGregor, M., & Chi, M. T. H. (2002). Collaborative interactions: The process of joint production and individual reuse of novel ideas. In W. D. Gray & C. D. Schunn (Eds.), Proceedings of the 24th Annual Conference of the Cognitive Science Society. Mahwah, NJ: Lawrence Erlbaum.
  5. VanLehn, K., Graesser, A. C., Jackson, G. T., Jordan, P., Olney, A., & Rose, C. P. (2007). When are tutorial dialogues more effective than reading? Cognitive Science, 31(1), 3-62.

Connections

<insert here>