Davy & MacWhinney - Spanish Sentence Production (2011-08-30)
<hr />
<div>Spanish Sentence Production<br />
{| border = "1"<br />
|-<br />
! Project Title<br />
| The Development of Speaking Fluency Through an Oral Repetition Task<br />
|-<br />
! Principal Investigator<br />
| Colleen Davy (Carnegie Mellon University)<br />
|-<br />
! Co-Principal Investigator<br />
| Brian MacWhinney (Carnegie Mellon University)<br />
|-<br />
! Study Start and End Dates<br />
| Study 1: Spring 2009<br />
|-<br />
! <br />
| Study 2: Spring 2010<br />
|-<br />
! <br />
| Study 3: Fall 2010<br />
|-<br />
! LearnLab<br />
| N/A<br />
|-<br />
! Number of Participants<br />
| ~25<br />
|-<br />
! Participant Hours<br />
| ~40<br />
|-<br />
! DataShop<br />
| Transcriptions of Studies 1 and 2 not currently uploaded, but available upon request<br />
|-<br />
! Current Status<br />
| Study 3 in progress; will start data collection Fall 2010<br />
<br />
|}<br />
<br />
==Abstract==<br />
The goal of this study is to determine whether and how oral repetition can improve the fluent production of Spanish sentences of various lengths and constructions. We do this by presenting students with spoken Spanish sentences and having them practice repeating them back. In the pilot study, students heard each sentence three times and immediately repeated it back. We measured the length of the repetition (how long it took them to repeat it back) and recorded the number and types of errors they made. We found that the practice helped students fluently repeat the sentences they heard, in terms of both the number of errors made and the time needed to repeat the sentence. <br />
<br />
Current studies train students to practice speaking sentences by describing series of pictures. During training, students see pictures and hear the sentence described by those pictures and are asked to repeat the sentence back. After the initial training phase, students should be able to respond to the pictures without hearing the spoken sentence. Future work will also look at different factors that may make a difference in training, including whether it is better to train on full sentences or on individual phrases.<br />
<br />
==Background and Significance==<br />
<br />
Levelt’s speaking model (1989) holds that speaking requires three different stages of processing: conceptualization, formulation, and articulation. In the conceptualization stage, the speaker generates a pre-verbal message, activating the concepts about which they wish to speak. In the formulation stage, activation spreads to the lexical level, the lemmas, which contain the lexical form and all the thematic, morphological, and syntactic information that goes along with it. Finally, in the articulation stage, the phonetic encoding of the lemma creates an articulatory score that the speaker uses to produce the motor movements involved in speaking. This multi-modular approach suggests that second language speakers may face three sources of difficulty in speaking: conceptualizing the message, retrieving the lemmas and the related morphological and syntactic information, and controlling the motor movements involved in actually articulating the speech. <br />
<br />
Yoshimura and MacWhinney (2007) implemented an oral repetition task to improve speaking in Japanese learners, having them practice reading aloud Japanese sentences containing between 0 and 3 novel words. They found that reading aloud improved fluent speech in terms of the length of utterance (how long it took them to read the sentence from start to finish) and the number of errors. A pilot study for the current line of research showed that the same pattern of results occurred when students of Spanish instead repeated sentences they heard. In this study, students heard a sentence, repeated it back, then were asked to a) translate the sentence into English and b) rate their speech in terms of fluency. They repeated this four times for each sentence. Further studies in this line of research will attempt to refine this task to achieve the greatest improvements in speech.<br />
<br />
==Glossary==<br />
[http://www.learnlab.org/research/wiki/index.php/Practice Practice]<br />
<br />
[http://www.learnlab.org/research/wiki/index.php/Fluency Fluency]<br />
<br />
[http://www.learnlab.org/research/wiki/index.php/Repetition Repetition]<br />
<br />
==Research Questions==<br />
<br />
1. During an oral repetition task, do students increase fluency in terms of the time it takes them to repeat back the sentence?<br />
<br />
2. Does this task help students increase fluency in terms of the number of errors they make?<br />
<br />
3. Are students aware of their own speech, to the extent that they can accurately rate their own performance?<br />
<br />
4. Will students be able to transfer their increased fluency to novel sentences?<br />
<br />
== Study One==<br />
<br />
Study one tested whether or not a repetition task could increase fluent production of the sentences. <br />
<br />
Nine third- and fourth-semester Spanish students at CMU participated in this study. They practiced with 40 sentences containing between four and 19 words, and between 9 and 31 syllables. During the practice phase, they heard each sentence four times and immediately repeated it back each time. After each repetition, they translated the sentence into English and rated how fluently they were able to repeat the sentence on a scale of 1 to 7. <br />
<br />
After the practice phase, they moved on to the test phase, where they heard each sentence and repeated it back one time. <br />
<br />
A week later, they came back for a delayed post-test, where they again heard each sentence once and repeated it back. <br />
<br />
===Hypothesis===<br />
<br />
For research question 1, we hypothesized that the amount of time students took to repeat each sentence would decrease across repetitions. For question 2, we predicted that students would produce fewer errors. We also predicted that students' self-ratings would reliably track their actual performance. Study 1 does not address research question 4, since it does not involve repeating novel sentences. <br />
<br />
===Independent Variables===<br />
The study used a within-subjects design, with repetition number as the independent variable; we tracked fluency across the four repetitions of each sentence. We also varied the length of the sentences the students heard: the sentences contained between four and 19 words, with an average of 8.42 words, and between 9 and 31 syllables, with an average of 15.84 syllables. <br />
<br />
===Dependent Variables===<br />
<br />
In this study, we used three measures of fluency: pre-speech pause (the amount of time before the student starts speaking), articulation time (the amount of time it takes the student to say the sentence from start to finish), and the number and type of errors and corrections the student makes. <br />
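As a sketch of how the two temporal measures might be derived from trial timestamps (the field names here are our own; the study's actual logging format is not described):<br />

```python
# Hypothetical sketch: deriving the two temporal fluency measures from
# trial timestamps. All parameter names are illustrative assumptions.

def fluency_measures(stimulus_end, speech_onset, speech_offset):
    """Return (pre_speech_pause, articulation_time) in seconds."""
    pre_speech_pause = speech_onset - stimulus_end    # silence before speaking
    articulation_time = speech_offset - speech_onset  # start-to-finish speech
    return pre_speech_pause, articulation_time

# Example trial: audio ends at 2.0 s, speech runs from 3.1 s to 9.6 s
pause, duration = fluency_measures(stimulus_end=2.0,
                                   speech_onset=3.1,
                                   speech_offset=9.6)
```

The error and correction counts, by contrast, come from hand-coded transcriptions rather than timestamps.<br />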
<br />
===Results===<br />
<br />
First, we found a linear relationship between trial number (1 through 4) and the duration of the utterance (F=4.318, p=0.038): the time from when participants started speaking to when they completed the repetition decreased across trials. The initial pause, the time between the end of the audio stimulus and the onset of speech, also decreased significantly (F=3.204, p=0.023). <br />
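The direction of such a linear trend can be illustrated with a simple least-squares slope over per-trial mean durations (the numbers below are invented for illustration, not the study's data):<br />

```python
# Minimal sketch of a linear-trend check: an ordinary least-squares slope
# of mean utterance duration on trial number. Durations are hypothetical.

def ols_slope(xs, ys):
    """Slope of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

trials = [1, 2, 3, 4]
mean_durations = [7.2, 6.5, 6.1, 5.9]      # illustrative mean seconds per trial
slope = ols_slope(trials, mean_durations)  # negative slope = speeding up
```

The actual analysis used an F-test on the trend; this sketch only shows the descriptive slope.<br />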
<br />
[[Image:Duration.jpg]]<br />
<br />
We also found that across attempts the number of correctly repeated sentences increased and the number of incomplete sentences (ones participants could not successfully repeat) decreased significantly. Across attempts, participants also produced significantly fewer missing words and different wordings (repetitions that kept the meaning of the original but changed its wording). A trend analysis also revealed significant linear relationships for the number of repetitions/corrections and for wrong article usages. However, contrary to our expectations, in both of these cases the number of repetitions/corrections and wrong articles actually increased across attempts.<br />
<br />
We also wanted to determine the extent to which students are aware of their own speech and whether they can accurately rate their own performance. To do this, we looked at whether the time taken to repeat the sentence and a number of different error types correlated with students' ratings of their own speech. First, we looked at the duration of the utterance and found a significant correlation, with a rating of 3 having the longest mean duration of utterance and 7 the shortest. Ratings of 1 and 2 had shorter durations because they generally indicated that the student was unable to repeat the sentence, leading to shorter, incomplete utterances. Second, we looked at whether students who rated their performance higher made fewer errors in their speech. We found that a) students who failed to complete the sentence reliably rated their performance as a 1 or 2, and b) students with fewer errors rated their performance higher than those who made more errors. This finding held for all types of errors except grammatical gender errors: students did not seem sensitive to grammatical gender errors and were no more likely to rate their performance lower after making one. <br />
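The rating-accuracy check amounts to correlating self-ratings with performance measures. A minimal sketch, with invented data (the study used 1-7 ratings; the error counts here are illustrative):<br />

```python
# Sketch of correlating self-ratings with error counts. A strong negative
# correlation would mean higher self-ratings go with fewer errors.
# All data values below are invented for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ratings = [2, 3, 4, 5, 6, 7]   # hypothetical self-ratings (1-7 scale)
errors  = [5, 4, 3, 2, 1, 0]   # hypothetical error counts per sentence
r = pearson(ratings, errors)   # strongly negative for these toy data
```
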
<br />
===Explanation===<br />
<br />
==Study Two== <br />
<br />
In Study Two, in addition to hearing the sentence spoken aloud, students also see pictures that depict the sentence they hear. This way, in the training phase they both see pictures and hear the sentence they repeat, but in the testing phase they can produce the sentences without hearing them ahead of time. This ensures that their speech is not relying on echoic memory, but actually requires them to retrieve lexical and morphological information as they speak. <br />
<br />
Students receive training on two constructions: the subjunctive (ex. "Yo dudo que tu estudies" - "''I doubt that you are studying''") and the preterit/imperfect contrast (ex. "Ayer/De joven tu conduciste/conducías un carro y yo saqué/sacaba fotos." - "''Yesterday/As a child you drove/drove a car and I took/took pictures''"). Neither of these constructions exists in English: the subjunctive is not marked, and there is no distinction between the preterit and imperfect past tense. Furthermore, both constructions contain two phrases, which can be trained either as one whole unit or broken into two separate units. <br />
<br />
Study Two will further investigate whether it is more effective to train students on the sentence as a whole or on separate phrases. For example, for the subjunctive sentences, students will be trained either on the whole sentence or on two separate phrases, "Yo dudo que-" and "que tu estudies". Training on phrases may increase learning for two reasons: first, breaking the sentence into pieces lowers working memory demands, which should increase performance on the task; and second, it may decrease cognitive load, freeing up more resources for learning.<br />
<br />
Study Two involves three phases: the Practice phase, the Immediate Post-test, and the Delayed Post-test. During the practice phase, students see pictures and hear a sentence that describes those pictures. They receive six blocks of training, three in each construction, each consisting of 7 sentences (or 14 phrases in the Phrase condition). After the training, they move on to the Immediate Post-test phase, where they see pictures and produce the sentences without hearing them first. The test phase consists of 42 sentences, 21 practiced during the training phase and 21 novel, presented in random order. The Delayed Post-test is exactly like the Immediate Post-test, but in a different order. <br />
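The 42-item test list could be assembled as below (sentence IDs are placeholders; the actual stimulus files are not part of this page):<br />

```python
# Sketch of building the randomized 42-item test list described above:
# 21 trained and 21 novel sentences shuffled together. IDs are placeholders.
import random

def build_test_list(trained, novel, seed=0):
    """Tag each sentence with its novelty status and shuffle reproducibly."""
    items = [(s, "trained") for s in trained] + [(s, "novel") for s in novel]
    random.Random(seed).shuffle(items)  # fixed seed -> reproducible order
    return items

trained_ids = [f"T{i + 1}" for i in range(21)]
novel_ids = [f"N{i + 1}" for i in range(21)]
test_list = build_test_list(trained_ids, novel_ids)
```

The delayed post-test would reuse the same items with a different seed, yielding the "different order" mentioned above.<br />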
<br />
===Hypotheses===<br />
<br />
1. Practice: Does one practice condition lead to more improvement in fluency (in terms of correct usage and lower duration/initial pause)?<br />
<br />
2. Test: Does one practice condition help learners produce similar sentences more fluently when they are producing the sentences on their own? <br />
<br />
3. Robustness: Does the practice have long-term effects on learners’ oral production?<br />
<br />
4. Generalizability: Is improvement limited to the specific practiced sentences, or can learners generalize to novel, similar sentences?<br />
<br />
===Independent Variables===<br />
<br />
Using two constructions will, to a certain extent, allow a within-subjects design. Each participant will receive training in one condition on one sentence construction, and in the other on the other construction. <br />
<br />
There are two conditions: the Phrase condition and the Sentence condition. In the Sentence condition, learners will practice the sentences as a whole; in the Phrase condition, the sentences are split into two phrases which are practiced separately. <br />
<br />
While it is possible to compare the two conditions in a within-subjects design, the two sentence constructions are very different in nature and lead to very different patterns of results. So, in reporting the results we treat each sentence construction as a separate experiment.<br />
<br />
===Dependent Variables===<br />
<br />
The dependent variables in this study are the same as in Study One: we measure fluency in terms of pre-sentence pause, articulation time, and errors. We calculated articulation time (mean duration of utterance) as the time between when the speaker started speaking and when they finished the sentence. In cases where the speaker failed to finish the sentence, we set the duration to 15 seconds, the maximum amount of time allotted for the recording. Since sentences are intrinsically longer than phrases during the practice phase, we normalized this duration (D) by dividing the learner's D by the native speaker's duration, giving a ratio (the D-ratio). The learner's production is more native-like when this value is close to 1; the greater the D-ratio, the more time the learner took compared to the native speaker, and the less native-like the repetition. <br />
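The D-ratio computation, including the 15-second cap for unfinished sentences, can be sketched directly from the description above (variable names are our own):<br />

```python
# Sketch of the D-ratio defined above: learner duration divided by native
# speaker duration, with unfinished sentences capped at the 15 s maximum.

MAX_RECORDING = 15.0  # seconds allotted per recording

def d_ratio(learner_duration, native_duration, finished=True):
    """Normalized duration: 1.0 means native-like timing; larger is slower."""
    d = learner_duration if finished else MAX_RECORDING
    return d / native_duration

completed = d_ratio(4.5, 3.0)                   # learner took 1.5x native time
unfinished = d_ratio(8.0, 3.0, finished=False)  # capped at 15.0 s before dividing
```
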
<br />
In addition to looking at the duration, we also looked at the initial pause (IP), the amount of time the learner took before he or she began speaking. This may be an indication of pre-speech planning; thus, the longer the speaker waits before he or she starts speaking, the more time he or she needed to process and formulate the sentence. Thus, more native-like performance will have a shorter IP. <br />
<br />
Finally, we looked at the number of errors, repetitions, and corrections the learners made as they repeated the sentences. We counted a repetition as the learner repeating a phoneme, word, or phrase without correcting previous speech, and a correction as a repetition that altered previous speech. We also coded errors according to the type of error made; however, for the purposes of this analysis we lump all error types together and look at uncorrected errors per sentence, which is the total number of errors minus the number of corrections.<br />
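The per-sentence accuracy measure is then a simple subtraction over the coded counts (the dict layout is hypothetical; the coding was done by hand on transcriptions):<br />

```python
# Sketch of the accuracy measure defined above: uncorrected errors per
# sentence = total errors minus corrections. The trial layout is hypothetical.

def uncorrected_errors(trial):
    """Errors the learner made and did not subsequently correct."""
    return trial["errors"] - trial["corrections"]

coded_trial = {"errors": 4, "corrections": 1, "repetitions": 2}
remaining = uncorrected_errors(coded_trial)  # 4 errors, 1 corrected -> 3
```
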
<br />
===Results===<br />
<br />
''Practice''<br />
<br />
Our first question was whether participants improved across the practice trials, and whether one condition led to more improvement or more native-like repetition. For our measure of temporal fluency, the D-ratio, we found a significant effect of repetition (F = 39.311, p&lt;0.01), with later repetitions taking significantly less time than the first, and of condition (F = 258.821, p&lt;0.02), with the phrase condition improving less than the sentence condition. We found similar patterns of results for initial pause and uncorrected errors. Figures 1 and 2 show D-ratios across trials for both preterit/imperfect and subjunctive sentences across conditions. <br />
<br />
<br />
[[Image:Image002.jpg]]<br />
<br />
Figure 1: D-Ratio for preterit/imperfect sentences across practice trials. <br />
<br />
<br />
[[Image:Image006.jpg]]<br />
<br />
Figure 2: D-Ratio for subjunctive sentences across practice trials. <br />
<br />
Note that, while there is less improvement in the phrase condition, production is more native-like in this condition (that is, the D-Ratio is closer to 1). So, while the Phrase condition leads to less improvement, it yields more native-like production. <br />
<br />
''Test''<br />
<br />
Next, we wanted to see whether the type of training made a difference during the test phase, when participants produce the sentences on their own. Here, we found a different pattern of results depending on the type of sentence. <br />
<br />
For preterit/imperfect sentences, participants who practiced in the Sentence condition had significantly shorter durations, shorter IPs, and fewer errors than those in the Phrase condition. This is especially true at the delayed post-test, though there are no significant differences between the immediate and delayed post-tests for either condition. <br />
<br />
[[Image:Image022.jpg]]<br />
<br />
Figure 3: Mean number of errors per sentence for preterit/imperfect sentences at immediate and delayed post-tests. <br />
<br />
However, the subjunctive sentences show a different pattern of results. For these sentences, which are shorter but more complex, the Sentence condition does better than the Phrase condition during the immediate post-test, but at the delayed post-test the Phrase condition does significantly better. In fact, the Phrase condition improves significantly by the delayed post-test, while the Sentence condition gets significantly worse. <br />
<br />
[[Image:Image024.jpg]]<br />
<br />
Figure 4: Mean number of errors per sentence for subjunctive sentences at immediate and delayed post-tests. <br />
<br />
''Robustness''<br />
<br />
Next, we wanted to see whether the training had any long-term effects. Looking at the results of the 2 (Repetition) by 2 (Condition) univariate ANOVA performed in the Test section, we can see that the long-term effects vary by sentence type.<br />
<br />
For the preterit/imperfect sentences (Figure 3), we can see no significant main effect of Repetition, and no interaction of Repetition and Condition. So, for these sentences, it appears that whatever effects of the training there are, they are still present a week later. <br />
<br />
However, for the subjunctive sentences (Figure 4) there is a rather interesting interaction. As mentioned in the above section, the Phrase condition performs significantly worse at the immediate post-test, but improves by the delayed post-test, while the Sentence condition sees significant decay between the immediate and delayed post-test. So, while the Phrase condition appears to lead to long-term improvements, the Sentence condition does not.<br />
<br />
''Generalizability''<br />
<br />
Finally, we wanted to see whether the training led to generalizable learning, or whether it simply allowed students to improve vocalization of the sentences on which they had been trained. To do this, we ran a one-way ANOVA for Novelty (novel or trained). We found a significant effect of novelty for both duration of utterance (F = 14.571, p&lt;0.01) and number of errors per sentence (F = 4.306, p = 0.038), with novel sentences taking longer to produce and containing more errors than trained sentences. However, a two (Condition) by two (Novelty) ANOVA found no interaction between Condition and Novelty, indicating that neither condition led to more generalizable learning.<br />
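For readers unfamiliar with the test, a one-way ANOVA F statistic is the ratio of between-group to within-group mean squares. A minimal sketch with invented error counts (not the study's data):<br />

```python
# Sketch of the one-way ANOVA F statistic used for the Novelty comparison:
# F = between-group mean square / within-group mean square.
# The group values below are invented for illustration.

def one_way_f(*groups):
    """F statistic for a one-way ANOVA over the given groups of observations."""
    all_vals = [v for g in groups for v in g]
    grand_mean = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

trained = [1.0, 1.2, 1.1, 0.9]  # hypothetical errors per sentence, trained items
novel = [2.0, 2.3, 1.8, 2.1]    # hypothetical errors per sentence, novel items
f_stat = one_way_f(trained, novel)
```

A large F (relative to its critical value for the given degrees of freedom) indicates a reliable group difference, as reported above for novelty.<br />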
<br />
==Study 3==<br />
<br />
Study 3 first investigates differences between speech elicitation methods, comparing the picture task used in Study 2 to the oral repetition task used in Study 1. It also incorporates a pre- and post-test with multiple testing methods, as well as working memory span and individual differences tasks, to a) further investigate the use of oral repetition in developing second language fluency and b) see whether using pictures adds anything to the oral repetition task. <br />
<br />
Study 3 also includes a series of pre- and post-tests. These will allow us to determine a) whether participants are actually improving and b) which skills are being trained by the tasks. <br />
<br />
===Independent Variables===<br />
<br />
This study has two conditions: Picture training and Repetition training. Picture training is identical to Study 2: participants see pictures and hear a sentence that describes those pictures, then repeat it back; they hear and repeat each sentence four times. Repetition training is identical to Picture training, except that participants do not see the pictures while they hear the sentence. <br />
<br />
As in Study 2, this study uses a within-subjects design, where all participants receive both kinds of training. However, rather than splitting up the training by sentence type, it is split up by verb. So, participants will receive Picture training with one set of verbs, and Repetition training on another set. They will be tested on both sets of verbs, as well as a third set that was not trained, which serves as a control. <br />
<br />
===Dependent Variables===<br />
<br />
Just like in Study 2, we are looking at temporal measures of fluency, including Initial Pause (IP) and Length of Duration (LD). We are also using a coding scheme almost identical to Study 2 to code repetitions, corrections, and grammatical errors. We will look for changes in temporal and accuracy measures of fluency during both training and testing phases. <br />
<br />
''Test Measures''<br />
<br />
One of the major additions in this study is a series of three test measures that tap into different aspects of sentence production. This differs from previous studies in that a) participants receive a pre-test as well as immediate and delayed post-tests, allowing comparison before and after training, and b) participants are tested on tasks on which they did not specifically receive training. The three tasks are described below:<br />
<br />
# ''Repetition'' This test is identical to the repetition training task: they hear a sentence and repeat it. This will test to see whether they improve simply in their ability to repeat back sentences they hear. It may be the case that successful performance on this task requires lexical retrieval and morphosyntactic processing. However, if participants' performance increases only on this task and not on other tasks that do require extensive processing, it may be the case that participants are only improving on more surface-level sound production. <br />
# ''Word Combination'' In this test, participants see a series of words displayed on the screen and combine those words to create a sentence. There are three word groups: the Cue at the top of the screen, which indicates what tense the sentence should be in (e.g., "Si", "Ayer", etc.); Subj1/Verb1 on the left-hand side, which gives the subject and verb of the first half of the sentence; and Subj2/Verb2 on the right-hand side, which gives the subject and verb for the second half of the sentence. For example, if they see the word "Si" at the top of the screen, "yo/cocinar la cena" on the left side and "tu/lavar los platos" on the right side, they would create the sentence "Si yo cocino la cena, tu lavarás los platos." Because this task removes the need for lexical retrieval, it measures whether training led to improvements in using the cues to determine verb tense and conjugate verbs quickly. <br />
# ''Translation'' In this test, participants see a sentence in English and translate it to Spanish. For example, if they see the sentence "Yesterday, we went fishing and you took pictures.", they would say "Ayer nosotros fuimos de pesca y tu sacaste fotos." This task, unlike the Word Combination task, involves both lexical retrieval (through translation) and morphosyntactic processing.<br />
<br />
=== Results===<br />
<br />
Data collection is still in progress, but should be completed by November 2010. Results should be available in Winter 2011.<br />
<br />
==References==<br />
<br />
Levelt, W. J. M. (1989). Speaking: From intention to articulation. Cambridge, MA: MIT Press.<br />
<br />
Yoshimura, Y., & MacWhinney, B. (2007). The effect of oral repetition on L2 speech fluency: an experimental tool and language tutor. Paper presented at the Speech and Language Technology in Education, The Summit Inn, Farmington, PA.<br />
<br />
<br />
</div>
<hr />
<div>= The PSLC Coordinative Learning cluster =<br />
<br />
== Abstract ==<br />
The studies in the Coordinative Learning cluster tend to focus on varying ''a)'' the types of information available to learners or ''b)'' the instructional methods they employ. In particular, the studies focus on the impact of having learners coordinate two or more such types. Given that the student has multiple [[sources]]/methods available, two factors that might impact learning are:<br />
<br />
*What is the relationship between the content in the two sources or the content generated by the two methods? Our hypothesis is that the two sources or methods facilitate [[robust learning]] when a [[knowledge component]] is difficult to understand or absent in one and is present or easier to understand in the other.<br />
*When and how does the student coordinate between the two sources or methods? Our hypothesis is that students should be encouraged to compare the two, perhaps by putting them close together in space or time. <br />
<br />
At the micro-level, the overall hypothesis is that robust learning occurs when the [[learning event space]] has target paths whose [[sense making]] difficulties complement each other (as expressed in the first bullet above) and the students make path choices that take advantage of these [[complementary]] paths (as in the second bullet, above). This hypothesis is just a specialization of the [[Root_node|general PSLC hypothesis]] to this cluster.<br />
<br />
The matrix below shows how studies in this cluster (pages for these studies can be found in the Descendants section below) either test or make use of various [[instructional method|instructional methods]] or treatments. When a study tests an instructional method, a "v" is shown in the appropriate cell to indicate that that method is '''varied''' in the study; that is, the [[robust learning]] gains of an experimental condition that receives this method are contrasted with those of an otherwise equivalent control condition that does not receive it. In this case (when a "v" is present), the study tests the [[InstructionalPrinciples|instructional principle]] indicated in the column. When a cell contains a "b", it indicates that '''both''' the experimental and control conditions use this instructional method (or employ this instructional principle). In this case, the study is not a true experimental test of the principle.<br />
<br />
<br><center>[[Image:Cl.JPG]]</center><br />
<br />
== Glossary ==<br />
[[:Category:Coordinative Learning|Coordinative Learning]] glossary.<br />
<br />
*'''[[Analogical comparison]]'''<br />
*'''[[Co-training]]'''<br />
*'''[[Complementary]]'''<br />
*'''[[Conceptual tasks]]''' <br />
*'''[[Contiguity]]'''<br />
*'''[[Coordination]]'''<br />
*'''[[Ecological control group]]'''<br />
*'''[[External representations]]'''<br />
*'''[[Input sources ]]'''<br />
*'''[[Instructional method]]'''<br />
*'''[[Multimedia sources]]'''<br />
*'''[[Procedural tasks]]''' <br />
*'''[[Self-explanation]]'''<br />
*'''[[Self-supervised learning]]'''<br />
*'''[[Sources]]'''<br />
*'''[[Strategies]]'''<br />
*'''[[Unlabeled examples]]'''<br />
<br />
== Research questions ==<br />
<br />
When and how does coordinating multiple sources of information or lines of reasoning increase robust learning?<br />
<br />
Two sub-groups of coordinative learning studies are exploring these more specific questions:<br />
<br />
=== Visualizations and Multi-modal sources ===<br />
<br />
When does adding visualizations or other multi-modal input enhance robust learning and how do we best support students in coordinating these sources?<br />
<br />
=== Examples and Explanations ===<br />
<br />
When and how should example study be combined and coordinated with problem solving to increase robust learning? When and how should explicit explanations be added or requested of students before, during, or after example study and problem solving practice?<br />
<br />
== Independent variables ==<br />
<br />
*Content of the sources (e.g., pictures, diagrams, written text, audio, animation) or the encouraged lines of reasoning (e.g., example study, self-explanation, conceptual task, procedural task) and combinations<br />
<br />
*Instructional activities designed to engage students in [[coordination]] (e.g., conceptual vs. [[procedural]] exercises, contiguous presentation of sources, [[self-explanation]])<br />
<br />
See [[:Category:Independent Variables]]<br />
<br />
== Dependent variables ==<br />
[[Normal post-test]] and measures of [[robust learning]].<br />
<br />
== Hypotheses ==<br />
When students are given sources/methods whose [[sense making]] difficulties are complementary and they are engaged in coordinating the sources/methods, then their learning will be more robust than it would otherwise be.<br />
<br />
== Explanation ==<br />
<br />
There are both [[sense making]] and [[foundational skill building]] explanations. From the sense making perspective, if the sources/methods yield complementary content and the student is engaged in coordinating them, then the student is more likely to successfully understand the instruction because if a student fails to understand one of the sources/methods, he can use the second to make sense of the first. From a foundational skill building perspective, attending to both sources/methods simultaneously associates [[features]] from both with the learned knowledge components, thus potentially increasing [[feature validity]] and hence [[robust learning]].<br />
<br />
== Descendants ==<br />
<br />
=== Visualizations and Multi-modal sources ===<br />
*[[Contiguous Representations for Robust Learning (Aleven & Butcher)]]<br />
**[[Static vs. Animated Visual Representations for Science Learning (Kaye, Small, Butcher, & Chi)]]<br />
*[[Mapping Visual and Verbal Information: Integrated Hints in Geometry (Aleven & Butcher)]]<br />
**[[Training Geometry Concepts with Visual and Verbal Sources (Burchfield, Aleven, & Butcher)]]<br />
*[[Visual Representations in Science Learning | Visual Representations in Science Learning (Davenport, Klahr & Koedinger)]]<br />
* Cotraining in language learning<br />
**[[Co-training of Chinese characters| Co-training of Chinese characters (Liu, Perfetti, Dunlap, Zi, Mitchell)]]<br />
**[[Co-training and pairing| The pairing effect in Chinese cotraining (Liu, Perfetti, Dunlap, Wu, Mitchell)]]<br />
*[[Learning Chinese pronunciation from a “talking head”| Learning Chinese pronunciation from a “talking head” (Liu, Massaro, Dunlap, Wu, Chen,Chan, Perfetti)]] [Was in Refinement and Fluency]<br />
*[[Visual Feature Focus in Geometry: Instructional Support for Visual Coordination During Learning (Butcher & Aleven)]]<br />
*[[Learning About Emergence and Heat Transfer (Chi)]]<br />
*[[Sequencing learning with multiple representations of rational numbers (Aleven, Rummel, & Rau)]]<br />
*[[Leverage Learning from Chemistry Visualizations (Ming & Schoenfield)]]<br />
*[[Perceptual Fluency in Geometry Achievement(Kao)]]<br />
<br />
=== Examples and Explanations ===<br />
*[[Booth | Improving skill at solving equations through better encoding of algebraic concepts (Booth, Siegler, Koedinger & Rittle-Johnson)]]<br />
*[[McLaren_et_al_-_Studying_the_Learning_Effect_of_Personalization_and_Worked_Examples_in_the_Solving_of_Stoich_Problems | Studying the Learning Effect of Personalization and Worked Examples in the Solving of Stoichiometry Problems (McLaren, Koedinger & Yaron)]]<br />
*[[Note-Taking_Technologies | Note-taking Project Page (Bauer & Koedinger)]]<br />
**[[Note-Taking: Restriction and Selection]] (completed)<br />
**[[Note-Taking: Coordination]] (planned)<br />
*[[REAP_main | The REAP Project: Implicit and explicit instruction on word meanings (Juffs & Eskenazi)]]<br />
*[[Help_Lite (Aleven, Roll)|Hints during tutored problem solving – the effect of fewer hint levels with greater conceptual content (Aleven & Roll)]]<br />
*[[Handwriting Algebra Tutor]] (Anthony, Yang & Koedinger)<br />
**[[Lab study proof-of-concept for handwriting vs typing input for learning algebra equation-solving]] (completed)<br />
**[[Effect of adding simple worked examples to problem-solving in algebra learning]] (completed, analysis in progress)<br />
**[[In vivo comparison of Cognitive Tutor Algebra using handwriting vs typing input]] (in progress)<br />
*[[Bridging_Principles_and_Examples_through_Analogy_and_Explanation | Bridging Principles and Examples through Analogy and Explanation (Nokes & VanLehn)]]<br />
*[[Does learning from worked-out examples improve tutored problem solving? | Does learning from worked-out examples improve tutored problem solving? (Renkl, Aleven & Salden)]]<br />
*[[Ringenberg_Examples-as-Help | Scaffolding Problem Solving with Embedded Example to Promote Deep Learning (Ringenberg & VanLehn)]]<br />
* [[The_Help_Tutor__Roll_Aleven_McLaren|Tutoring a meta-cognitive skill: Help-seeking (Roll, Aleven & McLaren)]]<br />
*[[Roll_IPL | Invention as Preparation for Learning (Roll, Aleven, Koedinger & Schwartz)]]<br />
*[[Baker_Choices_in_LE_Space | How Content and Interface Features Influence Student Choices Within the Learning Space (Baker, Corbett, Koedinger, & Rodrigo)]]<br />
*[[Mayer_and_McLaren_-_Social_Intelligence_And_Computer_Tutors | McLaren and Mayer - Social Intelligence and Learning from "polite" tutors]]<br />
<br />
== Annotated Bibliography ==<br />
Much research in human and machine learning has advocated various kinds of “multiples” to assist learning: <br />
* multiple data sources (e.g., human learning (HL): Mayer, 2001; machine learning (ML): Blum & Mitchell, 1998; Collins & Singer, 1999);<br />
* multiple representations (e.g., HL: Ainsworth & Van Labeke, 2004; ML: Liere & Tadepalli, 1997);<br />
* multiple strategies (e.g., HL: Klahr & Siegler, 1978; ML: Michalski & Tecuci, 1997; Saitta, Botta, & Neri, 1993);<br />
* multiple learning tasks (e.g., HL: Holland, Holyoak, Nisbett, & Thagard, 1986; ML: Caruana, 1997; Case, Jain, Ott, Sharma, & Stephan, 1998).<br />
<br />
Experiments in human learning have demonstrated, for instance, that instruction that combines rules or principles and [[example]]s yields better results than either alone (Holland, Holyoak, Nisbett, & Thagard, 1986), and that iterative instruction of both [[Procedural tasks|procedures]] and [[Conceptual tasks|concepts]] yields better learning than instruction of either alone (Rittle-Johnson & Koedinger, 2002; Rittle-Johnson, Siegler, & Alibali, 2001). See also the [http://www.psyc.memphis.edu/learning/principles/lp5.shtml variable learning principle.]<br />
<br />
Experiments in machine learning have demonstrated how more robust, generalizable learning can be achieved by training a single learner on ''multiple'' related tasks (Caruana 1997) or by training ''multiple'' learning systems on the same task (Blum & Mitchell 1998; Collins & Singer 1999; Muslea, Minton, & Knoblock, 2002). Blum and Mitchell (1998) provide both empirical results and a proof of the circumstances under which strategy combinations enhance learning. In particular, the [[co-training]] approach for combining multiple learning strategies yields better learning to the extent that the learning strategies produce “uncorrelated errors” – when one is wrong the other is often right. As an example of PSLC work, Donmez et al. (2005) demonstrate, using a multi-dimensional collaborative process analysis, that regularities across ''multiple'' codings of the same data can be exploited for the purpose of improving text classification accuracy for difficult codings.<br />
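As a rough illustration of the co-training idea described above -- two learners trained on different feature "views" of the same data, each labeling unlabeled examples for the other -- the toy sketch below uses simple one-dimensional threshold learners. All function names, data values, and the choice of learner are illustrative assumptions, not the setup from Blum and Mitchell's experiments.

```python
# Minimal co-training sketch (in the spirit of Blum & Mitchell, 1998).
# Each example has two redundant "views"; each learner sees only one view
# and teaches the other by labeling its most confident unlabeled examples.
import statistics

def train_threshold(examples, view):
    """Fit a 1-D threshold on one view: predict 1 if x[view] >= t."""
    pos = [x[view] for x, y in examples if y == 1]
    neg = [x[view] for x, y in examples if y == 0]
    return (statistics.mean(pos) + statistics.mean(neg)) / 2

def predict(t, x, view):
    return 1 if x[view] >= t else 0

def confidence(t, x, view):
    # Distance from the decision boundary serves as a confidence proxy.
    return abs(x[view] - t)

def co_train(labeled, unlabeled, rounds=3, per_round=2):
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(rounds):
        for view in (0, 1):  # the two views take turns teaching
            if not unlabeled:
                break
            t = train_threshold(labeled, view)
            # Pseudo-label the examples this view is most confident about.
            unlabeled.sort(key=lambda x: -confidence(t, x, view))
            for x in unlabeled[:per_round]:
                labeled.append((x, predict(t, x, view)))
            unlabeled = unlabeled[per_round:]
    return labeled

# Toy data: both coordinates correlate with the label, but each
# learner sees only one of them.
labeled = [((0.1, 0.2), 0), ((0.9, 0.8), 1)]
unlabeled = [(0.2, 0.1), (0.8, 0.9), (0.15, 0.25), (0.85, 0.75)]
result = co_train(labeled, unlabeled)
```

Because the two views make "uncorrelated errors" on this toy data, each learner's confident pseudo-labels extend the other's training set correctly.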
<br />
An ambitious goal of PSLC is to provide a rigorous causal theory of human learning at the level of precision of machine learning research. <br />
<br />
* Ainsworth, S., Bibby, P., & Wood, D. (2002). Examining the effects of different multiple representational systems in learning primary mathematics. The Journal of the Learning Sciences, 11(1), 25–61.<br />
* Ainsworth, S.E. & Van Labeke (2004) Multiple forms of dynamic representation. Learning and Instruction, 14(3), 241-255. <br />
* Blum, A., & Mitchell, T. (1998). Combining labeled and unlabeled data with co-training. In Proceedings of Eleventh Annual Conference on Computational Learning Theory (COLT), (pp. 92–100). New York: ACM Press. Available: citeseer.nj.nec.com/blum98combining.html<br />
* Caruana, R. (1997). Multitask learning. Machine Learning 28(1), 41-75. Available: citeseer.nj.nec.com/caruana97multitask.html.<br />
* Case, J., Jain, S., Ott, M., Sharma, A., & Stephan, F. (1998). Robust learning aided by context. In Proceedings of Eleventh Annual Conference on Computational Learning Theory (COLT), (pp. 44-55). New York: ACM Press.<br />
* Collins, M., & Singer, Y. (1999). Unsupervised models for named entity classification. In Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (pp. 189–196).<br />
* Donmez, P., Rose, C. P., Stegmann, K., Weinberger, A., & Fischer, F. (2005). Supporting CSCL with automatic corpus analysis technology. In Proceedings of Computer Supported Collaborative Learning (CSCL 2005).<br />
* Holland, J. H., Holyoak, K. J., Nisbett, R. E., & Thagard, P. R. (1986). Induction: Processes of inference, learning, and discovery. Cambridge, MA: MIT Press.<br />
* Klahr D., and Siegler R.S. (1978). The Representation of Children's Knowledge. In H.W. Reese and L.P. Lipsitt (Eds.), Advances in Child Development and Behavior, Academic Press, New York, NY, pp. 61-116.<br />
* Liere, R., & Tadepalli, P. (1997). Active learning with committees for text categorization. In Proceedings of AAAI-97, 14th Conference of the American Association for Artificial Intelligence (pp. 591—596). Menlo Park, CA: AAAI Press.<br />
* Mayer, R. E. (2001). Multimedia learning. New York: Cambridge University Press.<br />
* Michalski, R., & Tecuci, G. (Eds.) (1997). Machine learning: A multi-strategy approach. Morgan Kaufmann.<br />
* Muslea, I., Minton, S., & Knoblock, C. (2002). Active + semi-supervised learning = robust multi-view learning. In Proceedings of ICML-2002. Sydney, Australia.<br />
* Rittle-Johnson, B., Siegler, R. S., & Alibali, M. W. (2001). Developing conceptual understanding and procedural skill in mathematics: An iterative process. Journal of Educational Psychology, 93(2), 346–362.<br />
* Rittle-Johnson, B., & Koedinger, K. R. (2002). Comparing instructional strategies for integrating conceptual and procedural knowledge. Paper presented at the Psychology of Mathematics Education, National, Athens, GA.<br />
* Saitta, L., Botta, M., & Neri, F. (1993). Multi-strategy learning and theory revision. Machine Learning, 11(2/3), 153–172.<br />
<br />
[[Category:Cluster]]<br />
<br />
</div>Petrachaneyhttps://learnlab.org/wiki/index.php?title=Cue_strength&diff=12159Cue strength2011-08-30T13:44:33Z<p>Petrachaney: </p>
<hr />
<div>[[Category:Glossary]]<br />
[[Category:Refinement and Fluency]]<br />
In order to define cue strength, one first has to define the concept of a cue, and this must be done separately in each content domain. A linguistic cue involves the marking of a linguistic function by a linguistic form. In comprehension, the cue is the form, and cues compete for assignment to functions. Markings can be of three types: morphological (affixes and intonations), lexical-semantic (animacy, classifiers), and syntactic (word order). Cues are used to mark linguistic functions, such as case role, attachment, or coreference. For each cue, we can assess its strength by placing it in competition with other cues in experiments designed specifically to measure relative cue strength. Assuming a standard within-subjects ANOVA design, strength is then measured by fitting a maximum likelihood estimation (MLE) model to the data. The notion of cue strength can also be applied to other cognitive domains in a parallel fashion.<br />
(Brian MacWhinney)<br />
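As a schematic illustration of the MLE step described above, the sketch below fits the relative strengths of two competing cues to choice data from hypothetical conflict trials. The normalized-strength model, the grid-search fitting procedure, and all numbers are illustrative assumptions, not the analysis from any particular cue-competition study.

```python
# Illustrative MLE fit of relative cue strengths from conflict trials
# in which two cues (e.g., word order vs. animacy) point to competing
# interpretations. The interpretation favored by cue 1 is chosen with
# probability w1 / (w1 + w2), a simple normalized-strength model.
import math

def neg_log_likelihood(w1, w2, chose_cue1, n_trials):
    p = w1 / (w1 + w2)  # probability that cue 1 wins the competition
    return -(chose_cue1 * math.log(p) +
             (n_trials - chose_cue1) * math.log(1 - p))

def fit_strengths(chose_cue1, n_trials, grid=None):
    """Grid-search MLE with w1 + w2 = 1 (only relative strength matters)."""
    grid = grid or [i / 100 for i in range(1, 100)]
    best = min(grid, key=lambda w: neg_log_likelihood(
        w, 1 - w, chose_cue1, n_trials))
    return best, 1 - best

# Suppose participants followed cue 1 on 80 of 100 conflict trials:
w1, w2 = fit_strengths(chose_cue1=80, n_trials=100)
```

Under this toy model the fitted strengths simply recover the choice proportion; a real analysis would include multiple cue combinations and per-subject variation.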
<br />
</div>Petrachaneyhttps://learnlab.org/wiki/index.php?title=Corrective_self-explanation&diff=12158Corrective self-explanation2011-08-30T13:44:22Z<p>Petrachaney: </p>
<hr />
<div>==Brief statement of principle==<br />
Explaining how and why incorrect solutions are incorrect will help students to reject incorrect [[knowledge components]] and, thus, stop using incorrect strategies to solve problems.<br />
<br />
Corrective self-explanation is a kind of [[error correction support]] which is a kind of [[instructional method]].<br />
<br />
==Description of principle==<br />
<br />
===Operational definition===<br />
<br />
Corrective self-explanation is [[self-explanation]]s of ''incorrect'' [[worked examples]]; explaining how and why they are incorrect. See the study by [[Booth]].<br />
<br />
<br />
===Examples===<br />
[[Booth]]'s Corrective self-explanation exercises: [[Image:CSE3.jpg]]<br />
<br />
==Experimental support==<br />
<br />
===Laboratory experiment support===<br />
<br />
Siegler (2002) found that having students self-explain incorrect answers as well as correct answers increased learning of mathematical equality problems more than explaining only correct answers. <br />
<br />
Siegler & Chen (2008) also found that asking children to explain both why correct answers were correct and why incorrect answers were incorrect was more effective for learning to solve water displacement problems than only requesting explanations of correct answers. <br />
<br />
===In vivo experiment support===<br />
<br />
Preliminary results from [[Booth]] suggest that while completing either typical or corrective self-explanation exercises improves procedural performance in solving algebraic equations (Booth, Koedinger, & Siegler, 2007), corrective self-explanation may uniquely improve conceptual knowledge about features of the equations. <br />
<br />
==Theoretical rationale== <br />
As a form of [[self-explanation]], corrective self-explanation works via making knowledge explicit. Unlike typical [[self-explanation]], however, corrective self-explanation focuses on making explicit 1) ''that'' a given [[knowledge component]] is wrong and 2) ''why'' the [[knowledge component]] is wrong (what [[features]] of the situation make the knowledge inappropriate).<br />
<br />
==Conditions of application==<br />
1. Corrective self-explanation is likely useful only when students also receive experience that facilitates [[construction]] of correct [[knowledge components]]. If students receive only corrective self-explanation, they may come to reject their incorrect [[knowledge components]], but with nothing to replace them, they will either flounder (having no way to solve the problem) or revert to the only strategy they know, even though they know it is incorrect.<br />
<br />
2. Große & Renkl (2007) also show that explaining incorrect examples is difficult for poor learners unless the step where the error occurred is highlighted; without this cue, students have difficulty detecting and explaining the error.<br />
<br />
==Caveats, limitations, open issues, or dissenting views==<br />
==Variations (descendants)==<br />
==Generalizations (ascendants)==<br />
[[Prompted Self-explanation]]<br />
<br />
==References==<br />
* Booth, J.L., Paré-Blagoev, J. & Koedinger, K.R. (2010). Transforming equation-solving assignments to improve algebra learning: A collaboration with the SERP-MSAN Partnership. Paper to be presented at the annual meeting of the ''American Educational Research Association''.<br />
<br />
* Booth, J.L., Koedinger, K.R., & Siegler, R.S. (2007, October). The effect of corrective and typical self-explanation on algebraic problem solving. Poster presented at the Science of Learning Centers Awardee’s Meeting in Washington, DC.<br />
<br />
* Große, C. S., & Renkl, A. (2007). Finding and fixing errors in worked examples: Can this foster learning outcomes? ''Learning and Instruction, 17'', 612-634.<br />
<br />
* Rittle-Johnson, B. (2006). Promoting transfer: Effects of self-explanation and direct instruction. ''Child Development, 77'', 1–29.<br />
<br />
* Siegler, R. S., & Chen, Z. (2008). Differentiation and integration: Guiding principles for analyzing cognitive change. ''Developmental Science, 11'', 433-448.<br />
<br />
* Siegler, R. S. (2002). Microgenetic studies of self-explanations. In N. Granott & J. Parziale (Eds.), ''Microdevelopment: Transition processes in development and learning'' (pp. 31-58). New York: Cambridge University.<br />
<br />
[[Category:Glossary]]<br />
[[Category:Instructional Principle]]<br />
<br />
<br />
[[Category:Independent Variables]]<br />
[[Category:Booth]]<br />
<br />
<br />
</div>Petrachaneyhttps://learnlab.org/wiki/index.php?title=Contiguous_Representations_for_Robust_Learning_(Aleven_%26_Butcher)&diff=12157Contiguous Representations for Robust Learning (Aleven & Butcher)2011-08-30T13:44:10Z<p>Petrachaney: </p>
<hr />
<div>== Learning with Diagrams in Geometry: Strategic Support for Robust Learning ==<br />
''Vincent Aleven and Kirsten Butcher''<br />
<br />
=== Summary Table ===<br />
====Study 1====<br />
{| border="1" cellspacing="0" cellpadding="5" style="text-align: left;"<br />
| '''PIs''' || Vincent Aleven & Kirsten R. Butcher<br />
|-<br />
| '''Other Contributors''' || <b>Graduate Students:</b> Andy Tzou (CMU HCII)<br><br />
<b>Research Programmers/Associates:</b> Octav Popescu (Research Programmer, CMU HCII), Grace Lee Leonard (Research Associate, CMU HCII), Thomas Bolster (Research Associate, CMU HCII)<br />
<br />
|-<br />
| '''Study Start Date''' || January 24, 2006<br />
|-<br />
| '''Study End Date''' || February 22, 2006<br />
|-<br />
| '''LearnLab Site''' || Central Westmoreland Career & Technology Center (CWCTC)<br />
|-<br />
| '''LearnLab Course''' || Geometry<br />
|-<br />
| '''Number of Students''' || 65<br />
|-<br />
| '''Total Participant Hours''' || 390<br />
|-<br />
| '''Data available in DataShop''' || <br />
[https://pslcdatashop.web.cmu.edu/DatasetInfo?datasetId=80 Dataset: Contiguity CWCTC Winter 2006]<br><br />
* '''Pre/Post Test Score Data:''' No<br />
* '''Paper or Online Tests:''' Paper<br />
* '''Scanned Paper Tests:''' No<br />
* '''Blank Tests:''' No<br />
* '''Answer Key: ''' No<br />
|}<br />
<br><br />
<br />
====Study 2====<br />
{| border="1" cellspacing="0" cellpadding="5" style="text-align: left;"<br />
| '''PIs''' || Vincent Aleven & Kirsten R. Butcher<br />
|-<br />
| '''Other Contributors''' || <b>Graduate Students:</b> Andy Tzou (CMU HCII), Carl Angioli (CMU HCII), Michael Nugent (Pitt, Computer Science)<br><br />
<b>Research Programmers/Associates:</b> Octav Popescu (Research Programmer, CMU HCII), Grace Lee Leonard (Research Associate, CMU HCII), Thomas Bolster (Research Associate, CMU HCII)<br />
<br />
|-<br />
| '''Study Start Date''' || April 28, 2006<br />
|-<br />
| '''Study End Date''' || May 26, 2006<br />
|-<br />
| '''LearnLab Site''' || Central Westmoreland Career & Technology Center (CWCTC)<br />
|-<br />
| '''LearnLab Course''' || Geometry<br />
|-<br />
| '''Number of Students''' || 130<br />
|-<br />
| '''Total Participant Hours''' || 780<br />
|-<br />
| '''Data available in DataShop''' || <br />
[https://pslcdatashop.web.cmu.edu/DatasetInfo?datasetId=79 Dataset: Contiguity CWCTC Spring 2006]<br><br />
* '''Pre/Post Test Score Data:''' No<br />
* '''Paper or Online Tests:''' Paper<br />
* '''Scanned Paper Tests:''' No<br />
* '''Blank Tests:''' No<br />
* '''Answer Key: ''' No<br />
|}<br />
<br><br />
<br />
=== Abstract ===<br />
Does integration of visual and verbal knowledge during learning support deep understanding? Can student interactions with visual information during problem-solving support [[robust learning]]? The overall goal of this project is to gain a better understanding of 1) visual and verbal [[knowledge components]] in a problem-solving environment and, 2) how interacting with visual information can support the development of deep understanding. Ultimately, we are interested in [[coordination]] and [[integration]] processes in learning with visual and verbal [[knowledge components]], and how these processes may support [[robust learning]].<br />
<br />
We are using the Geometry Cognitive Tutor as a research vehicle for our project. In geometry, visual information is represented in a problem diagram and verbal/symbolic information is represented in text that contains given and goal information as well as in conceptual rules/principles of geometry. The research described here investigates whether [[implicit instruction]] (via direct interaction with visual information during learning) can support [[robust learning]] through [[Visual-verbal coordination|visual-verbal coordination]]. This [[implicit instruction]] is achieved via interactive instructional events in an intelligent tutoring environment, where students receive feedback on errors and perform a simple (menu-based) form of [[self-explanation]] during practice.<br />
<br />
=== Background & Significance ===<br />
In this research, we draw upon previous work in learning with [[multimedia sources]], [[self-explanation]]s, and Cognitive Tutors. We hypothesize that two key cognitive processes support integrated knowledge development and [[robust learning]] when using visual and verbal representations. These processes are: 1) Successful [[mapping]] between visual and verbal information, and 2) [[Integration]] processes that combine visual and verbal representations into integrated [[knowledge components]]. Previous research has suggested that contiguous representations -- those that provide close temporal and physical proximity between visual and verbal elements during learning -- can support understanding of multimedia materials (e.g., Mayer, 2001); these benefits have been hypothesized to result from the easing of cognitive load required for [[mapping]] between visual and verbal information. <br />
<br />
We are investigating if these benefits can be seen during real classroom learning when students engage in extended practice with learning materials. Our research examines the potential benefits of [[contiguity]] during intelligent tutoring for robust learning in classroom environments. <br />
<br />
We hypothesize that [[implicit instruction]] that supports interaction with visual information will support [[coordination]] between and [[integration]] of visual and verbal information, promoting [[robust learning]] as measured by knowledge [[retention]] and [[transfer]]. <br />
<br />
By [[coordination]], we mean the processes that support [[mapping]] between relevant visual and verbal information as well as the processes that keep relevant [[knowledge components]] active. For example, in geometry a student needs to map between text references to angles and their location in a diagram and will need to maintain the numerical (given or solved) value of that angle to use in problem solving. By [[visual-verbal integration]], we mean knowledge construction events that involve generating a representation that includes both visual and verbal knowledge components. For example, in geometry a student may need to construct an understanding of linear angles that includes both a verbal definition (e.g., “two adjacent angles that form a line”) and a visual situation description (e.g., a visual representation of the two angles formed by intersection of a line).<br />
<br />
In the context of the Geometry Cognitive Tutor, [[contiguity]] is achieved by placing related representations, such as a diagram and a workspace in which answers are entered, in close proximity that reduces (and in some cases, removes) the need for [[mapping]] between visual and verbal information. Although contiguous representations may reduce the initial cognitive load associated with [[mapping]] between representations, cognitive load demands may be less influential in classroom environments where practice is extended and distributed (Olina, Reiser, Huang, Lim, & Park, 2006). Thus, we assume that contiguous representations can support robust learning by promoting [[integration]] of visual and verbal information during practice. That is, [[contiguity]] may support students' connection between and [[integration]] of visual and verbal information leading to more robust knowledge of geometry principles. If these assumptions are true, we would expect to see similar performance on practiced problems for students who trained with [[Contiguous Representation|contiguous]] vs. noncontiguous representations. However, we would expect students using the contiguous representations to demonstrate better knowledge [[transfer]].<br />
<br />
=== Glossary ===<br />
See [[:Category:Visual-Verbal Learning (Aleven & Butcher Project)|Visual-Verbal Learning Project Glossary]]<br />
<br />
=== Research questions ===<br />
#Do [[Contiguous Representation|contiguous representations]] in geometry support students' [[Retention|retention]] and [[transfer]] of [[knowledge components]]?<br />
#Are the effects of [[Contiguous Representation|contiguous representations]] stronger for [[transfer]] than for [[retention]]?<br />
<br />
=== Dependent variables ===<br />
*Pretest, [[normal post-test]], and [[transfer]] test measuring student performance on:<br />
**Problem-solving items isomorphic to the practiced problems ([[normal post-test]])<br />
**Complex and demanding problem-solving items unlike those seen during problem practice ([[transfer]])<br />
<br />
*Log data collected during tutor use, used to assess:<br />
**Learning curves<br />
**Time on task<br />
**Error rates<br />
**Latency of responses<br />
<br />
*(Planned) Log data collected during subsequent tutor use, will use to assess:<br />
**[[Accelerated future learning]] <br />
***(Note: Not available for studies conducted in "Circles" unit of the Geometry Cognitive Tutor, since the Circles unit is completed at the end of the school year.)<br />
<br />
=== Independent Variables ===<br />
*Contiguity of Representation<br />
:''Contiguous representation (students work in diagram) vs. Non-contiguous representation (students work in separate table)''<br />
<br />
Figure 1. Noncontiguous representation: Screen shot of tutor interface.<br><br />
[[Image:Butcher_TableScreenShot2.jpg]]<br />
<br />
Figure 2. Contiguous representation: Screen shot of tutor interface.<br><br />
[[Image:Butcher_DiagramScreenShot.jpg]]<br />
<br />
=== Hypotheses ===<br />
<br />
*Contiguous representations increase strategic inferences and [[integration]] of visual and verbal [[knowledge components]] during problem-solving. The resulting [[visual-verbal integration | Visual-verbal Integration]] will support deep learning as evidenced by transfer items that require joint analysis with geometry principles and diagrams.<br />
<br />
=== Findings ===<br />
<br />
Current findings suggest that interaction with visual representations during problem-solving supports deep [[transfer]] during learning. <br />
<br />
====Study 1 (In Vivo, Geometry Cognitive Tutor) ====<br />
*Summary<br />
**In Vivo Study: 10th grade geometry classes in a rural Pennsylvania school<br />
**Domain: Angles curriculum in the Geometry Cognitive Tutor<br />
**Grade-matched pairs of students were randomly assigned to one of two conditions:<br />
***Diagram (Contiguous) Condition: Students interacted directly with geometry diagrams and accepted answers are displayed directly in the diagram<br />
***Table (Noncontiguous) Condition: Students work separate from the diagrams, in a distally located table<br />
<br />
*Findings<br />
**No overall effect of experimental condition on students' performance on geometry answers or reasons at posttest<br />
**Although working in the Diagram condition improved lower-knowledge students' explanations at posttest, higher-knowledge students performed best when working in the Table condition. The result was evidenced by a significant 3-way interaction of Test Time (Pre- vs. Posttest) X Condition (Table vs. Diagram) X Prior Knowledge (Higher vs. Lower) for students' performance on geometry rules at posttest (F(1,39) = 6.2, p < .02).<br />
<br />
====Study 2 (In Vivo, Geometry Cognitive Tutor) ====<br />
*Summary<br />
**In Vivo Study: 10th grade geometry classes in a rural Pennsylvania school<br />
**Domain: Circles curriculum in the Geometry Cognitive Tutor<br />
**Assessment was expanded to include not only answers and explanations for problem-solving items (as in Study 1), but also explanations on deep transfer items (explanations of unsolvable problems) and non-numerical reasoning items (true/false items that require students to judge whether a geometry rule is appropriate to relate named diagram elements).<br />
**Grade-matched pairs of students were randomly assigned to one of two conditions:<br />
***Diagram (Contiguous) Condition: Students interacted directly with geometry diagrams and accepted answers are displayed directly in the diagram<br />
***Table (Noncontiguous) Condition: Students work separate from the diagrams, in a distally located table<br />
<br />
*Findings<br />
**Problem-solving: No condition differences for numerical answers (F(1, 89) = 1.03, p > .3) or explanations for solvable problems (F(1, 89) <1).<br />
**Deep Transfer Explanations: There was a significant effect of condition on students' explanations of unsolvable problems (F(1, 89) = 4.1, p = .046). Students in the Diagram (Contiguous) condition explained unsolvable problems better (M = .13, SE = .03) than students in the Table (Noncontiguous) condition (M = .06, SE = .02).<br />
<br />
<br><br />
Figure 3. Mean performance on explanations for unsolvable problems by experimental condition, at pre- and posttest.<br><br />
[[Image:Butcher_UnsolvableExplanations.jpg]]<br />
<br />
*True/False items: Although there were no condition differences for performance on "true" items (F(1,89) = 2.4, p = .13), students in the Diagram (Contiguous) condition better recognized and explained false answers at posttest (F (1, 89) = 4.3, p = .04). That is, students from both conditions were equally able to recognize statements that gave valid relationships between geometry rules and diagram elements (Diagram, M = .71, SE = .04; Table, M = .72, SE = .03). However, students who interacted with diagrams during practice were better able to recognize when and explain why given geometry rules were inappropriate to relate named diagram elements (M = .23, SE = .02) than students who worked separately from diagrams during practice (M = .17, SE = .02).<br />
<br />
<br><br />
Figure 4. Mean performance on recognizing/explaining inappropriate applications of geometry rules, by experimental condition at pre- and posttest.<br />
[[Image:Butcher_FalseExplanations.jpg]]<br />
<br />
=== Explanation ===<br />
<br />
The deep [[transfer]] benefits seen in Experiment 2 suggest that contiguous representations may help students [[Integration|integrate]] visual and verbal [[knowledge components]] during learning. From a Coordinative Learning perspective, the contiguous tutor interface provides [[implicit instruction]]al support for [[coordination]] of visual-verbal knowledge during tutored problem solving. Although the same diagram (an implicit/passive form of instruction) is present in both the contiguous and the noncontiguous representations, active interaction with the diagram (an active/implicit form of instruction) supports knowledge [[transfer]] following tutored practice. Active integration may cause students to attend to both representations simultaneously and thereby better distinguish relevant from irrelevant features. Enhanced attention to both representations may facilitate a process like [[co-training]]: Through easier [[coordination]] of feature interpretations across the visual and verbal representations, the student may be more likely to prune irrelevant features (e.g., the apparent size of an angle) that may be absent or inconsistent across representations and notice relevant features (e.g., the given geometric constraints on an angle) that may be present or consistent across representations. Such instructional facilitation of [[coordination]] should increase [[feature validity]] of [[knowledge components]] and promote [[robust learning]].<br />
<br />
Although we cannot rule out the possibility that contiguous representations may support [[mapping]] between visual and verbal information in problem-solving, we see little evidence for substantial performance-based effects of mapping support on our [[normal post-test]]. All students performed equally well on trained problem-solving skills. Especially for higher-knowledge learners, interactive tutored practice may support mapping sufficiently to promote at least near-term [[retention]] of [[knowledge components]].<br />
<br />
In terms of the micro-level of the theoretical framework, the contiguous representations should reduce the effort of deep learning paths in the [[learning event space]] by supporting strategic inferences and reasoning directly with the diagram. Our data may also suggest that contiguous representations can have a learning path effect: students who are able to reason directly with diagram representations may attend more closely to the geometric features and relations to which geometry principles apply. This could impact meaningful learning by increasing [[feature validity]] of the visual and verbal [[knowledge components]].<br />
<br />
===Further Information===<br />
==== Connections ====<br />
<b>Interactive Communication as Support for Visual-Verbal Integration</b>:<br>Our research is investigating multiple methods with which student learning can be supported by interactions with pictorial information during geometry learning -- see also our work on Integrated Hints in geometry: [[Mapping Visual and Verbal Information: Integrated Hints in Geometry (Aleven & Butcher)]]. However, our work also includes a more explicit method for supporting students' integration of visual and verbal knowledge components. This method involves interactive support for students' [[Elaborated Explanations | elaborated explanations]] during geometry learning. Research investigating this explicit support is part of the [[Interactive Communication]] Cluster: [[Using Elaborated Explanations to Support Geometry Learning (Aleven & Butcher)]]<br />
<br />
<b>Visual Representations for Robust Learning in Other Domains</b>: Our efforts to support students' integration of visual and verbal knowledge are informed by and related to efforts investigating the use of visual representations to support [[robust learning]] in other domains. A closely related PSLC project is [[Visual Representations in Science Learning|Visual Representations in Science Learning (Davenport, Klahr, & Koedinger)]], in which researchers are exploring whether coordination between verbal and visual representations can help students refine initially shallow understandings into meaningful chemical concepts.<br />
<br />
==== Annotated Bibliography ====<br />
*Presentation to the PSLC Advisory Board, Fall 2006. [http://www.learnlab.org/uploads/mypslc/talks/butchercontiguity_ab2006_final_distribute.ppt Link to Powerpoint slides]<br />
*Butcher, K., & Aleven, V. (2007). Integrating visual and verbal knowledge during classroom learning with computer tutors. In D.S. McNamara & J.G. Trafton (Eds.), Proceedings of the 29th Annual Cognitive Science Society (pp. 137-142). Austin, TX: Cognitive Science Society. [http://www.learnlab.org/uploads/mypslc/publications/op557-butcher.pdf PDF File]<br />
*Butcher, K., & Aleven, V. (2008). Diagram Interaction during Intelligent Tutoring in Geometry: Support for Knowledge Retention and Deep Understanding. In B. C. Love, K. McRae, & V. M. Sloutsky (Eds.), Proceedings of the 30th Annual Conference of the Cognitive Science Society (pp. 1736-1741). Austin, TX: Cognitive Science Society.<br />
<br />
==== References ====<br />
*Mayer, R. E. (2001). Multimedia Learning. Cambridge, Cambridge University Press.<br />
*Olina, Z., Reiser, R., Huang, X., Lim, J., & Park, S. (2006). Problem format and presentation sequence: Effects on learning and mental effort among U.S. high school students. Applied Cognitive Psychology, 20, 299-309.<br />
<br />
[[Category:Study]]<br />
[[Category:Data available in DataShop]]<br />
<br />
</div>Petrachaney