= Learning Chinese pronunciation from a “talking head” =
----
'''Summary Table'''
*Learning Chinese pronunciation from a “talking head”
*Researchers: Ying Liu, Dominic Massaro, Susan Dunlap, Suemei Wu, Trevor Chen, Derek Chan, Charles Perfetti
*PIs: Ying Liu, Dominic Massaro, Charles Perfetti
*Others who have contributed 160 hours or more:
*Post-Docs:
*Graduate Students: Trevor Chen
*Study Start Date: Sep 1, 2005
*Study End Date: Dec 31, 2006
*LearnLab Site and Courses: CMU Chinese Online
*Number of Students: 20
*Total Participant Hours for the study: 40
*Data in the Data Shop: Yes
----

== Abstract ==
In this study, we compared the learning of Chinese pronunciation under three different online instruction methods: audio only, a human “talking head”, and a computer-generated synthetic “talking head”. The learning took place through a web site developed specifically for students learning Chinese in the Chinese LearnLab[http://learnlab.org/learnlabs/chinese/]. In both “talking head” conditions, the face of the speaker occupied 2/3 of the video screen. When students viewed the human “talking head”, the major information came from the shape of the mouth and the lip movements that accompanied the audio. The synthetic “talking head”, in contrast, is transparent to reveal the internal articulators, and it was accompanied by slower-than-normal speech to match the articulation of the “talking head”. We predict that [[multimedia sources]] can lead to [[robust learning]] when the [[cognitive load]] stays within limits.

== Glossary ==

Visual; audio; video

== Research question ==

Does visual input from a “talking head” enhance the learning of Chinese pronunciation?

== Background ==

Multimedia technology has been used in second language learning for many years. Current technology makes it possible to deliver not only text but also auditory and visual information over the Internet. It has been found that multiple strategies and multiple modalities facilitate learning (Blum and Mitchell, 1998). For example, research on English showed that visual information about the vertical separation between the lips and the degree of lip spreading/rounding helps the understanding of spoken language (Massaro and Cohen, 1990; Cohen and Massaro, 1994). So, does a visually presented “talking head”, which provides both auditory and visual information, help Chinese character learning, and especially the robust learning of Chinese pronunciations, which contain difficult consonants and tones? The method has not yet been tested in a well-designed experiment. However, based on a study in which we used a real-person “talking head” to train true beginners on Chinese characters, we believe it is a very effective learning method. Dr. Massaro’s research group is currently developing an animated 3D Chinese virtual speaker, Bao (Massaro, Ouni, Cohen, and Clark, in press). In a perceptual recognition experiment, both the animated video (Baldi) and the natural video were perceived better than the voice-only condition, and the two video conditions performed equally well. We will run a comparison study of audio-only, Bao, and real-person talking heads with our Chinese learners.

== Dependent variables ==

Accuracy of pronouncing Chinese syllables (initials and finals).

== Independent variables ==

Three learning methods: audio only (control), human “talking head”, computer-synthesized “talking head”.

Different Chinese syllables, listed in Table 1.

Table 1. The syllables are all tone-1 Mandarin words (Pinyin) except those with the tones indicated in parentheses. UC = unique consonants; NUC = non-unique consonants; NUS = non-unique syllables; US = unique syllables; UV = unique vowels.

{| border="1"
! UC !! NUC !! NUS !! US !! UV
|-
| ji || pi || bao || ju || ge
|-
| qie || nie || dao || qu || he
|-
| xian || tian || gao || xu || ke
|-
| zhen || fen || || || e(2)
|-
| chuan || kuan || || || u(3)
|-
| sha || la || || ||
|}

== Hypothesis ==

We predict that visual input, when used appropriately, can make the learning of Chinese pronunciation more robust.

== Findings ==

The analysis of the finals showed a significant condition effect (χ²(2) = 7.39, p = 0.025). Pairwise comparisons showed that the synthetic talking head (Baldi) was significantly better than the audio-only condition (χ²(1) = 7.36, p = 0.0067). Least-squares means are shown below.

Figure. Least-squares mean percentages of improvement, based on the logistic model.

[[Image:head1.jpg]]
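
For readers who want to see the shape of such an analysis, the sketch below runs a logistic-regression condition contrast on trial-level pronunciation accuracy and reports likelihood-ratio χ² tests. The data file and column names are hypothetical placeholders; this illustrates the type of analysis, not the original analysis code.

<pre>
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical trial-level data: one row per pronunciation attempt, with columns
# 'correct' (0/1), 'condition' ('audio', 'human_head', 'synthetic_head'), 'student'.
data = pd.read_csv("talking_head_finals.csv")

# Overall condition effect: likelihood-ratio test of the condition model vs. the null model.
full = smf.logit("correct ~ C(condition)", data=data).fit(disp=0)
null = smf.logit("correct ~ 1", data=data).fit(disp=0)
lr = 2 * (full.llf - null.llf)
print(f"condition effect: chi2(2) = {lr:.2f}, p = {stats.chi2.sf(lr, df=2):.4f}")

# Pairwise contrast, e.g. the synthetic talking head vs. audio only.
pair = data[data["condition"].isin(["audio", "synthetic_head"])]
full_p = smf.logit("correct ~ C(condition)", data=pair).fit(disp=0)
null_p = smf.logit("correct ~ 1", data=pair).fit(disp=0)
lr_p = 2 * (full_p.llf - null_p.llf)
print(f"synthetic vs. audio: chi2(1) = {lr_p:.2f}, p = {stats.chi2.sf(lr_p, df=1):.4f}")
</pre>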

== Explanation ==

It is difficult to learn to speak a language just by listening to it, especially for a second language learner at the beginner’s level. Visual cues provide extra information for reaching the goal of speaking “natively”. Imitation is best achieved by understanding how the speech organs produce the sound. The current findings support the claim that Bao (the Chinese Baldi) has a significant advantage over audio alone in teaching Chinese vowel pronunciation. The human face falls between these two methods, because it provides some useful facial information but the internal articulators are not visible. We conclude that visual speech provides a significant benefit for learners improving their pronunciation.

As a node under the [[coordinative learning]] cluster, the [[coordination]] of visual and audio inputs is the cognitive process that leads to more [[robust learning]].

== Descendents ==
None.

== Further information ==
Massaro, D. W., Liu, Y., Chen, T. H., & Perfetti, C. A. (2006). A multilingual embodied conversational agent for tutoring speech and language learning. Proceedings of the Ninth International Conference on Spoken Language Processing (Interspeech 2006 - ICSLP, September, Pittsburgh, PA), 825-828. Universität Bonn, Bonn, Germany.

= Learning a tonal language: Chinese =
----
'''Summary Table'''
*Node Title: Learning a tonal language: Chinese
*Researchers: Min Wang, Ying Liu, Suemei Wu, Derek Chan, Charles Perfetti
*PIs: Min Wang, Charles Perfetti, Ying Liu
*Others who have contributed 160 hours or more:
*Post-Docs: Baoguo Chen
*Graduate Students: Derek Chan, Brian Brubaker
*Study Start Date: Sep 1, 2005
*Study End Date: Dec 31, 2006
*LearnLab Site and Courses: CMU Chinese Online
*Number of Students: 150
*Total Participant Hours for the study: 300
*Data in the Data Shop: Yes
----
== Abstract ==
*The tonal feature of the Chinese language poses a particular challenge for a beginning learner of Chinese as a second language. In this project, we test learning hypotheses based on the assumption that attending to the critical [[features]] of the tonal pitch contour facilitates learning.
*This study consists of experiments on both tone perception and tone production tasks. In the tone perception task, three training conditions were tested: 1) visual pitch contours that depict the acoustic information of the tones, together with the Pinyin spelling of the spoken syllable; 2) the numerals that represent the tones in traditional classroom instruction, together with the Pinyin spelling of the spoken syllable; 3) visual pitch contours, without Pinyin spelling. By comparing these three training conditions, we will test two hypotheses: 1) using visual information about the tone waveform facilitates students’ perception of auditory tones; 2) providing the Pinyin spelling allows the students to focus on the tone and therefore yields more [[robust learning]], as measured by [[transfer]] and [[long-term retention]] tasks.
*In the tone production task, we use a frequency analyzer to extract the fundamental frequency of the student’s speech. The pitch contour of each production is displayed to the student in real time during production practice (a minimal sketch of this kind of pitch extraction follows). By comparing the group that receives this individualized pitch contour with a group that does not, we predict that the former will show more [[robust learning]] of tone production, visible as pronunciation [[refinement]].
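
As a rough illustration of the pitch-contour feedback described above, the sketch below estimates a fundamental-frequency (F0) track from a recorded syllable using frame-wise autocorrelation. It is a minimal, self-contained illustration, not the analyzer used in the study; the frame size, pitch range, and the wav_to_float helper in the usage comment are assumptions.

<pre>
import numpy as np

def estimate_f0_track(signal, sr, frame_ms=40, hop_ms=10, fmin=75.0, fmax=400.0):
    """Estimate an F0 contour with frame-wise autocorrelation.

    signal : 1-D numpy array of audio samples (floats in [-1, 1])
    sr     : sampling rate in Hz
    Returns (times_sec, f0_hz); near-silent frames are reported as np.nan.
    """
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    min_lag = int(sr / fmax)              # shortest pitch period we accept
    max_lag = int(sr / fmin)              # longest pitch period we accept
    times, f0 = [], []
    for start in range(0, len(signal) - frame, hop):
        x = signal[start:start + frame]
        x = x - x.mean()
        times.append(start / sr)
        if np.sqrt(np.mean(x ** 2)) < 1e-3:        # treat near-silence as unvoiced
            f0.append(np.nan)
            continue
        ac = np.correlate(x, x, mode="full")[frame - 1:]   # autocorrelation at lags >= 0
        lag = min_lag + int(np.argmax(ac[min_lag:max_lag]))
        f0.append(sr / lag)                # period in samples -> frequency in Hz
    return np.array(times), np.array(f0)

# Hypothetical usage: plot the returned contour so a learner can compare it with a model tone.
# times, f0 = estimate_f0_track(wav_to_float("ma1.wav"), sr=16000)
</pre>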

== Glossary ==
Tone; pitch contour; visual feedback

== Research question ==
How can crucial tonal information be used optimally to facilitate Chinese tone learning?

== Background ==
*The basic speech unit of Chinese is the syllable, and each syllable is divided into two parts: onset and rime. The onset of a Chinese syllable is always a single consonant. In most syllables the rime segment consists mainly of vowels. As a result, Chinese has a much smaller number of syllables than spoken English does (Hanley, Tzeng, & Huang, 1999). This leads to a large number of homophones in Chinese. However, because of the existence of tone in Chinese syllables, the number of homophones is reduced. There are about 1,300 tone syllables in spoken Chinese (Taylor & Taylor, 1995).
*The tonal feature of the Chinese language forms a sharp contrast to many alphabetic languages such as English. American college students learning Chinese may encounter great difficulty in acquiring tone skills. Wang, Perfetti, and Liu (2003) used an onset-rime-tone matching task to test beginning Chinese learners’ phonological processing skills. We found that these beginning learners showed poorer performance in tone matching than in onset and rime matching.
*There is very limited research on tone learning. Three-year-old Chinese native-speaking children have been shown to detect rime and tone when they are combined, but not to detect rime and tone separately; five-year-olds, on the other hand, can process rime and tone independently (Ho & Bryant, 1997). Wang, Spence, Jongman and Sereno (1999) trained American listeners to perceive Chinese tones and found a significant increase in identification accuracy from pretest to posttest.

== Dependent variables ==

[[Normal post-test]]: Accuracy on tone selection and decision tasks, and evaluations of productions.

== Independent variables ==

''Tone perception study'': 1) visual pitch contours that depict the acoustic information of the tones, together with the Pinyin spelling of the spoken syllable; 2) the numerals that represent the tones in traditional classroom instruction, together with the Pinyin spelling of the spoken syllable; 3) visual pitch contours, without Pinyin spelling.

''Tone production study'': 1) visual feedback from a tone analyzer applied to the student’s pronunciation; 2) no visual feedback.

== Hypothesis ==
Having students focus on the tonal feature, by providing a visual pitch contour plus segmental information, facilitates tone perception and production.

== Findings ==
Current results from two terms of the tone perception experiment show that providing segmental information (Pinyin) yields a better learning curve. The term 1 learning curves (lessons 1 to 8) showed that the Pinyin+contour and Pinyin+number conditions are better than the contour-only condition. The figure of fitted learning curves below shows that the former two conditions have a more negative slope (a faster learning rate); a sketch of such a curve fit follows the figure.
*[[Image:tone1.jpg]]
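
To make the slope comparison concrete, here is a minimal sketch of fitting a log-linear learning curve (error rate against practice opportunity) separately for each condition and comparing the fitted slopes. The column names and data file are hypothetical; this is not the model behind the reported figure.

<pre>
import numpy as np
import pandas as pd

def fit_learning_curves(df):
    """Fit a log-linear learning curve per condition; a more negative slope means faster learning.

    Expected (hypothetical) columns: 'condition', 'opportunity' (1, 2, 3, ...),
    and 'correct' (0/1 for each trial).
    """
    slopes = {}
    for cond, grp in df.groupby("condition"):
        err = 1.0 - grp.groupby("opportunity")["correct"].mean()   # error rate per opportunity
        err = err[err > 0]                                         # keep points where the log is defined
        x = np.log(err.index.to_numpy(dtype=float))
        y = np.log(err.to_numpy())
        slope, intercept = np.polyfit(x, y, deg=1)                 # log(error) ~ slope * log(opportunity)
        slopes[cond] = slope
    return slopes

# Hypothetical usage:
# slopes = fit_learning_curves(pd.read_csv("tone_perception_trials.csv"))
# print(slopes)   # e.g. {'contour_only': -0.3, 'pinyin_contour': -0.6, 'pinyin_number': -0.55}
</pre>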

== Explanation ==
Learning Chinese tones was facilitated by having students [[focusing|focus]] on the tonal features. Providing segmental information (Pinyin) for a syllable before its sound is learned gives more [[assistance]] to beginners, making it easier for them to pay attention to the tone.
Furthermore, the visual pitch contour and the auditory tone are [[complementary]] sources of information for learning tones. The mental representation of tones is more complete when the visual pitch contour is provided together with Pinyin.

== Descendents ==
*Tone perception (the present page)
*Tone production (under construction)

== Further information ==
www.pitt.edu/~liuying/pslc_tone.doc

= The PSLC Coordinative Learning cluster =

== Abstract ==
The studies in the Coordinative Learning cluster tend to focus on varying ''a)'' the types of information available for learning or ''b)'' the instructional methods that they employ. In particular, the studies focus on the impact of having learners coordinate two or more types. Given that the student has multiple [[sources]]/methods available, two factors that might impact learning are:

*What is the relationship between the content in the two sources or the content generated by the two methods? Our hypothesis is that the two sources or methods facilitate [[robust learning]] when a [[knowledge component]] is difficult to understand or absent in one and is present or easier to understand in the other.
*When and how does the student coordinate between the two sources or methods? Our hypothesis is that students should be encouraged to compare the two, perhaps by putting them close together in space or time.

At the micro-level, the overall hypothesis is that robust learning occurs when the [[learning event space]] has target paths whose [[sense making]] difficulties complement each other (as expressed in the first bullet above) and the students make path choices that take advantage of these [[complementary]] paths (as in the second bullet above). This hypothesis is just a specialization of the [[Root_node|general PSLC hypothesis]] to this cluster.

The matrix below shows how studies in this cluster (pages for these studies can be found in the Descendants section below) either test or make use of various [[instructional method|instructional methods]] or treatments. When a study tests an instructional method, a "v" is shown in the appropriate cell to indicate that the method is '''varied''' in the study, that is, the [[robust learning]] gains of an experimental condition that receives this method are contrasted with those of an otherwise equivalent control condition that does not receive it. In this case (when a "v" is present), the study tests the [[InstructionalPrinciples|instructional principle]] indicated in the column. When a cell contains a "b", it indicates that '''both''' the experimental and control conditions use this instructional method (or employ this instructional principle); in this case, the study is not a true experimental test of the principle.

<center>[[Image:cl-theory.jpg]]</center>

== Glossary ==
[[:Category:Coordinative Learning|Coordinative Learning]] glossary.

*'''[[Co-training]]'''
*'''[[Complementary]]'''
*'''[[Conceptual tasks]]'''
*'''[[Contiguity]]'''
*'''[[Coordination]]'''
*'''[[Ecological control group]]'''
*'''[[External representations]]'''
*'''[[Input sources]]'''
*'''[[Instructional method]]'''
*'''[[Multimedia sources]]'''
*'''[[Procedural tasks]]'''
*'''[[Self-explanation]]'''
*'''[[Self-supervised learning]]'''
*'''[[Sources]]'''
*'''[[Strategies]]'''
*'''[[Unlabeled examples]]'''

== Research questions ==

When and how does coordinating multiple sources of information or lines of reasoning increase robust learning?

Two sub-groups of coordinative learning studies are exploring these more specific questions:

=== Visualizations and Multi-modal sources ===

When does adding visualizations or other multi-modal input enhance robust learning, and how do we best support students in coordinating these sources?

=== Examples and Explanations ===

When and how should example study be combined and coordinated with problem solving to increase robust learning? When and how should explicit explanations be added or requested of students before, during, or after example study and problem-solving practice?

== Independent variables ==

*Content of the sources (e.g., pictures, diagrams, written text, audio, animation) or the encouraged lines of reasoning (e.g., example study, self-explanation, conceptual task, procedural task) and combinations

*Instructional activities designed to engage students in [[coordination]] (e.g., conceptual vs. [[procedural]] exercises, contiguous presentation of sources, [[self-explanation]])

See [[:Category:Independent Variables]]

== Dependent variables ==
[[Normal post-test]] and measures of [[robust learning]].

== Hypotheses ==
When students are given sources/methods whose [[sense making]] difficulties are complementary and they are engaged in coordinating the sources/methods, then their learning will be more robust than it would otherwise be.

== Explanation ==

There are both [[sense making]] and [[foundational skill building]] explanations. From the sense making perspective, if the sources/methods yield complementary content and the student is engaged in coordinating them, then the student is more likely to successfully understand the instruction, because if a student fails to understand one of the sources/methods, he can use the second to make sense of the first. From a foundational skill building perspective, attending to both sources/methods simultaneously associates [[features]] from both with the learned knowledge components, thus potentially increasing [[feature validity]] and hence [[robust learning]].


== Descendents ==

=== Visualizations and Multi-modal sources ===
*[[Contiguous Representations for Robust Learning (Aleven & Butcher)]]
**[[Static vs. Animated Visual Representations for Science Learning (Kaye, Small, Butcher, & Chi)]]
*[[Mapping Visual and Verbal Information: Integrated Hints in Geometry (Aleven & Butcher)]]
**[[Training Geometry Concepts with Visual and Verbal Sources (Burchfield, Aleven, & Butcher)]]
*[[Visual Representations in Science Learning | Visual Representations in Science Learning (Davenport, Klahr & Koedinger)]]
* Cotraining in language learning
**[[Co-training of Chinese characters| Co-training of Chinese characters (Liu, Perfetti, Dunlap, Zi, Mitchell)]]
**[[Co-training and pairing| The pairing effect in Chinese cotraining (Liu, Perfetti, Dunlap, Wu, Mitchell)]]
*[[Learning Chinese pronunciation from a “talking head”| Learning Chinese pronunciation from a “talking head” (Liu, Massaro, Dunlap, Wu, Chen, Chan, Perfetti)]] [Was in Refinement and Fluency]
*[[Visual Feature Focus in Geometry: Instructional Support for Visual Coordination During Learning (Butcher & Aleven)]]
*[[Learning About Emergence and Heat Transfer (Chi)]]

=== Examples and Explanations ===
*[[Booth | Improving skill at solving equations through better encoding of algebraic concepts (Booth, Siegler, Koedinger & Rittle-Johnson)]]
*[[McLaren_et_al_-_Studying_the_Learning_Effect_of_Personalization_and_Worked_Examples_in_the_Solving_of_Stoich_Problems | Studying the Learning Effect of Personalization and Worked Examples in the Solving of Stoichiometry Problems (McLaren, Koedinger & Yaron)]]
*[[Note-Taking_Technologies | Note-taking Project Page (Bauer & Koedinger)]]
**[[Note-Taking: Restriction and Selection]] (completed)
**[[Note-Taking: Coordination]] (planned)
*[[REAP_main | The REAP Project: Implicit and explicit instruction on word meanings (Juffs & Eskenazi)]]
*[[Help_Lite (Aleven, Roll)|Hints during tutored problem solving – the effect of fewer hint levels with greater conceptual content (Aleven & Roll)]]
*[[Handwriting Algebra Tutor]] (Anthony, Yang & Koedinger)
**[[Lab study proof-of-concept for handwriting vs typing input for learning algebra equation-solving]] (completed)
**[[Effect of adding simple worked examples to problem-solving in algebra learning]] (completed, analysis in progress)
**[[In vivo comparison of Cognitive Tutor Algebra using handwriting vs typing input]] (in progress)
*[[Bridging_Principles_and_Examples_through_Analogy_and_Explanation | Bridging Principles and Examples through Analogy and Explanation (Nokes & VanLehn)]]
*[[Does learning from worked-out examples improve tutored problem solving? | Does learning from worked-out examples improve tutored problem solving? (Renkl, Aleven & Salden)]] [Also in Interactive Communication]
*[[Ringenberg_Examples-as-Help | Scaffolding Problem Solving with Embedded Example to Promote Deep Learning (Ringenberg & VanLehn)]]
*[[Roll_IPL | Invention as Preparation for Learning (Roll, Aleven, Koedinger & Schwartz)]]
*[[Baker_Choices_in_LE_Space | How Content and Interface Features Influence Student Choices Within the Learning Space (Baker, Corbett, Koedinger, & Rodrigo)]]
*[[Mayer_and_McLaren_-_Social_Intelligence_And_Computer_Tutors | Building Social Intelligence into Computer-Based Tutors (Mayer & McLaren)]]

== Annotated Bibliography ==
Much research in human and machine learning has advocated various kinds of “multiples” to assist learning:
* multiple data sources (e.g., human learning (HL): Mayer, 2001; machine learning (ML): Blum & Mitchell, 1998; Collins & Singer, 1999);
* multiple representations (e.g., HL: Ainsworth & Van Labeke, 2004; ML: Liere & Tadepalli, 1997);
* multiple strategies (e.g., HL: Klahr & Siegler, 1978; ML: Michalski & Tecuci, 1997; Saitta, Botta, & Neri, 1993);
* multiple learning tasks (e.g., HL: Holland, Holyoak, Nisbett, & Thagard, 1986; ML: Caruana, 1997; Case, Jain, Ott, Sharma, & Stephan, 1998).

Experiments in human learning have demonstrated, for instance, that instruction that combines rules or principles with [[example]]s yields better results than either alone (Holland, Holyoak, Nisbett, & Thagard, 1986), and that iterative instruction of both [[Procedural tasks|procedures]] and [[Conceptual tasks|concepts]] yields better learning (Rittle-Johnson & Koedinger, 2002; Rittle-Johnson, Siegler, & Alibali, 2001).

Experiments in machine learning have demonstrated how more robust, generalizable learning can be achieved by training a single learner on ''multiple'' related tasks (Caruana, 1997) or by training ''multiple'' learning systems on the same task (Blum & Mitchell, 1998; Collins & Singer, 1999; Muslea, Minton, & Knoblock, 2002). Blum and Mitchell (1998) provide both empirical results and a proof of the circumstances under which strategy combinations enhance learning. In particular, the [[co-training]] approach for combining multiple learning strategies yields better learning to the extent that the learning strategies produce “uncorrelated errors” – when one is wrong the other is often right. As an example of PSLC work, Donmez et al. (2005) demonstrate, using a multi-dimensional collaborative process analysis, that regularities across ''multiple'' codings of the same data can be exploited for the purpose of improving text classification accuracy for difficult codings.
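
As a concrete illustration of the co-training idea cited above (Blum & Mitchell, 1998), the sketch below trains two classifiers on two independent views of the same items and lets each classifier pseudo-label unlabeled examples for the shared training pool. It is a minimal sketch of the general algorithm under assumed inputs (a two-view feature split and scikit-learn logistic regressions), not code from any PSLC study.

<pre>
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_train(X1, X2, y, labeled_idx, rounds=10, per_round=5):
    """Minimal Blum-and-Mitchell-style co-training over two feature views.

    X1, X2      : arrays of shape (n_items, n_features_view); two views of the same items
    y           : labels, consulted only for the initially labeled items
    labeled_idx : indices of the initially labeled items
    """
    pseudo = {i: y[i] for i in labeled_idx}          # item index -> (pseudo-)label
    clf1 = LogisticRegression(max_iter=1000)
    clf2 = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        idx = sorted(pseudo)
        labels = np.array([pseudo[i] for i in idx])
        clf1.fit(X1[idx], labels)
        clf2.fit(X2[idx], labels)
        for clf, X in ((clf1, X1), (clf2, X2)):      # each view teaches the shared pool
            unlabeled = [i for i in range(len(y)) if i not in pseudo]
            if not unlabeled:
                return clf1, clf2
            probs = clf.predict_proba(X[unlabeled])
            best = np.argsort(probs.max(axis=1))[-per_round:]   # most confident unlabeled items
            for b in best:
                pseudo[unlabeled[b]] = clf.classes_[np.argmax(probs[b])]
    return clf1, clf2
</pre>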

An ambitious goal of PSLC is to provide a rigorous causal theory of human learning results at the level of precision of machine learning research.

* Ainsworth, S., Bibby, P., & Wood, D. (2002). Examining the effects of different multiple representational systems in learning primary mathematics. The Journal of the Learning Sciences, 11(1), 25–61.
* Ainsworth, S. E., & Van Labeke (2004). Multiple forms of dynamic representation. Learning and Instruction, 14(3), 241–255.
* Blum, A., & Mitchell, T. (1998). Combining labeled and unlabeled data with co-training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory (COLT) (pp. 92–100). New York: ACM Press. Available: citeseer.nj.nec.com/blum98combining.html
* Caruana, R. (1997). Multitask learning. Machine Learning, 28(1), 41–75. Available: citeseer.nj.nec.com/caruana97multitask.html
* Case, J., Jain, S., Ott, M., Sharma, A., & Stephan, F. (1998). Robust learning aided by context. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory (COLT) (pp. 44–55). New York: ACM Press.
* Collins, M., & Singer, Y. (1999). Unsupervised models for named entity classification. In Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (pp. 189–196).
* Donmez, P., Rose, C. P., Stegmann, K., Weinberger, A., & Fischer, F. (2005). Supporting CSCL with automatic corpus analysis technology. To appear in the Proceedings of Computer Supported Collaborative Learning.
* Holland, J. H., Holyoak, K. J., Nisbett, R. E., & Thagard, P. R. (1986). Induction: Processes of inference, learning, and discovery. Cambridge, MA: MIT Press.
* Klahr, D., & Siegler, R. S. (1978). The representation of children's knowledge. In H. W. Reese & L. P. Lipsitt (Eds.), Advances in Child Development and Behavior (pp. 61–116). New York, NY: Academic Press.
* Liere, R., & Tadepalli, P. (1997). Active learning with committees for text categorization. In Proceedings of AAAI-97, 14th Conference of the American Association for Artificial Intelligence (pp. 591–596). Menlo Park, CA: AAAI Press.
* Mayer, R. E. (2001). Multimedia learning. New York: Cambridge University Press.
* Michalski, R., & Tecuci, G. (Eds.) (1997). Machine learning: A multi-strategy approach. Morgan Kaufmann.
* Muslea, I., Minton, S., & Knoblock, C. (2002). Active + semi-supervised learning = robust multi-view learning. In Proceedings of ICML-2002. Sydney, Australia.
* Rittle-Johnson, B., Siegler, R. S., & Alibali, M. W. (2001). Developing conceptual understanding and procedural skill in mathematics: An iterative process. Journal of Educational Psychology, 93(2), 346–362.
* Rittle-Johnson, B., & Koedinger, K. R. (2002). Comparing instructional strategies for integrating conceptual and procedural knowledge. Paper presented at the Psychology of Mathematics Education, National, Athens, GA.
* Saitta, L., Botta, M., & Neri, F. (1993). Multi-strategy learning and theory revision. Machine Learning, 11(2/3), 153–172.
[[Category:Cluster]]

= Co-training of Chinese characters =
----
'''Summary Table'''
*Node Title: Learning to read Chinese: [[Co-training]] in humans (Study 1)
*Researchers: Ying Liu, Charles Perfetti, Susan Dunlap, Gusheng Zi, Tom Mitchell
*PIs: Ying Liu, Charles Perfetti, Tom Mitchell
*Others who have contributed 160 hours or more:
*Post-Docs: Gusheng Zi
*Graduate Students: Derek Chan
*Study Start Date: Sep 1, 2005
*Study End Date: Dec 31, 2005
*LearnLab Site and Courses: LRDC, pull-out study
*Number of Students: 44
*Total Participant Hours for the study: 44
*Data in the Data Shop: Yes
----

== Abstract ==
The present study explored how native English speakers learn to speak and read Chinese in a co-training environment. The experiment consisted of two parts. The first part was training, which taught the mapping from input (Chinese fonts and sounds) to output (English translations) for 16 Chinese characters. Training methods were manipulated in this part: a quarter of the subjects received only labeled training trials (English translation provided); the others received additional [[unlabeled examples|non-labeled trials]] (only the orthography and/or phonology, without the English translation). The non-labeled trials were further separated into three types: unpaired, correlated paired, and uncorrelated paired, with each type used for one quarter of the subjects.
The second part was a posttest, in which students produced the English translation when they saw the Chinese fonts or heard the Chinese sounds one by one. Translation accuracy was recorded. The results showed that [[unlabeled examples]] did help learning, and that uncorrelated paired examples did best among the three types of unlabeled examples.
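
To make the four between-subjects training conditions concrete, here is a rough sketch of how a trial list for each condition could be assembled. The item structure, trial counts, and field names are hypothetical illustrations of the design described above, not the experiment's actual materials or scripts.

<pre>
import random

# Hypothetical items: each character has a written form, a spoken form, and an English label.
ITEMS = [{"font": f"char_{i}", "sound": f"syll_{i}", "label": f"word_{i}"} for i in range(16)]

def make_training_trials(condition, n_unlabeled=32, seed=0):
    """Build a training trial list for one condition of Study 1.

    condition: 'labeled-only', 'unpaired', 'correlated-paired', or 'uncorrelated-paired'.
    Every condition gets the labeled trials; the last three add unlabeled trials of the stated type.
    """
    rng = random.Random(seed)
    trials = [dict(it) for it in ITEMS]                       # labeled trials: font + sound + label
    if condition == "unpaired":
        for _ in range(n_unlabeled):                          # one modality alone, no translation
            it = rng.choice(ITEMS)
            modality = rng.choice(["font", "sound"])
            trials.append({modality: it[modality], "label": None})
    elif condition in ("correlated-paired", "uncorrelated-paired"):
        for _ in range(n_unlabeled):                          # both modalities together, no translation
            it = rng.choice(ITEMS)
            trials.append({"font": it["font"], "sound": it["sound"], "label": None})
        # In the real design, 'correlated' vs. 'uncorrelated' controls whether a given font
        # always co-occurs with the same speaker; that surface-feature detail is omitted here.
    rng.shuffle(trials)
    return trials

# Hypothetical usage:
# trials = make_training_trials("uncorrelated-paired")
</pre>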

== Glossary ==

Labeling; source pairing; source correlation.

== Research question ==

How do native English speakers learn to speak and read Chinese under various coordinative learning conditions?

== Background ==

In machine learning research, it has been found that multiple strategies and multiple modalities facilitate learning (Blum and Mitchell, 1998). However, the effectiveness of the properties of “co-training” theory has not yet been tested in human learners. We carried out this study to directly test two important properties of this theory in human learners. There are two results from the finished experiment and one non-result of interest. Most dramatic is the advantage of written over spoken input. This has nothing to do with co-training but is interesting and important for L2 word learning (translation). Second is the pairs effect, the advantage of spoken + written input presented during unlabeled training compared with either one separately. The independence of the surface features of these inputs (specific speaker, specific font) was not a factor.

To understand the pairs effect, we have to know whether it is restricted to, or larger for, [[unlabeled examples|unlabeled trials]]. Experiment 1 did not manipulate pairing in labeled trials. In the fall of 2006, we tested the pairing property under both labeled and unlabeled trials.

To understand the correlation property better, we are testing it in an in vivo setup with more learning sessions.

== Dependent variables ==

[[Normal post-test]]: Accuracy of producing the English word in reading and/or listening situations.

== Independent variables ==

*Labeling
*Pairing
*Variation
*Correlation

== Hypothesis ==

Pairing the visual font and the auditory sound of Chinese characters should enhance learning in both labeled and unlabeled trials, but the benefit should be largest when the trials are unlabeled.

[[Image:cotraining1.jpg]]

== Findings ==

*“Unlabeled paired” trials may aid learning. Learning meanings was facilitated by the addition of unlabeled paired trials that did not provide meaning.
**However, this unlabeled-trials effect was restricted to cross-modal pairs (spoken syllable and written character); it was absent when only one modality (spoken syllable) or the other (written character) was presented.
**Implication: Cross-modal inputs in this situation can establish multiple representations (speech–writing pairs) from which meaning links are more readily retrieved.
*Written forms were learned better than spoken forms. There was a large advantage for the presentation of written characters over their corresponding spoken syllables in learning a form–meaning pair.
*A benefit of uncorrelated examples was not observed.
**Correlated examples: a given font and a given speaker always co-occur (conditionally dependent).
**Uncorrelated examples: a given font occurs with all speakers, and a given speaker occurs with all fonts (conditionally independent).
**This is still being assessed using multiple learning sessions.

[[Image:cotraining2.jpg]]

== Explanation ==

The fact that learning meanings was facilitated by the addition of unlabeled paired trials that did not provide meaning implies that label predictions are generated for unlabeled trials, so these trials serve as self-generated labeled trials and become meaningful material for learning. This effect is especially pronounced in the multiple-input situation (paired trials) because the establishment of multiple representations (speech–writing pairs) makes the “label prediction” more accurate.
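
The account above is essentially a self-training (pseudo-labeling) loop: the learner's own confident predictions on unlabeled items are treated as extra labeled data. The sketch below shows that loop in its simplest machine-learning form; the classifier choice and confidence threshold are assumptions for illustration, and it is not offered as a model of the human data.

<pre>
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_labeled, y_labeled, X_unlabeled, threshold=0.9, max_rounds=10):
    """Self-training: repeatedly pseudo-label confident unlabeled items and retrain."""
    X_pool, y_pool = X_labeled.copy(), y_labeled.copy()
    remaining = X_unlabeled.copy()
    clf = LogisticRegression(max_iter=1000)
    for _ in range(max_rounds):
        clf.fit(X_pool, y_pool)
        if len(remaining) == 0:
            break
        probs = clf.predict_proba(remaining)
        confident = probs.max(axis=1) >= threshold                 # "self-generated labels"
        if not confident.any():
            break
        X_pool = np.vstack([X_pool, remaining[confident]])
        y_pool = np.concatenate([y_pool, clf.classes_[probs[confident].argmax(axis=1)]])
        remaining = remaining[~confident]
    return clf
</pre>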

== Descendents ==

None.

== Further information ==

= Co-training and pairing =
''Under development.''
----
'''Summary Table'''
*Node Title: Learning to read Chinese: [[Co-training]] in humans (Study 2)
*Researchers: Ying Liu, Charles Perfetti, Susan Dunlap, Suemei Wu, Tom Mitchell
*PIs: Ying Liu, Charles Perfetti, Tom Mitchell
*Others who have contributed 160 hours or more:
*Graduate Students: Derek Chan, Susan Dunlap
*Study Start Date: Sep 1, 2006
*Study End Date: Dec 31, 2006
*LearnLab Site and Courses: CMU Chinese Online
*Number of Students: 20
*Total Participant Hours for the study: 20
*Data in the Data Shop: Yes
----

== Abstract ==
The present study continued to explore how native English speakers learn to speak and read Chinese in a co-training environment. The experiment consisted of two parts. The first part was training, which taught the mapping from input (Chinese fonts and sounds) to output (English translations) for 16 Chinese characters. Four training methods were applied in a 2 × 2 crossed design; the two factors were labeled pairing and unlabeled pairing. Every subject received all four methods in a counterbalanced order.
The second part was a posttest, in which students produced the English translation when they saw the Chinese fonts or heard the Chinese sounds one by one. Translation accuracy was recorded.
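
As a small illustration of the counterbalancing described above, the sketch below assigns the four training methods to subjects with a simple cyclic Latin square, so each method appears in each serial position equally often across subjects. The condition labels and the assignment scheme are hypothetical illustrations, not the lab's actual procedure.

<pre>
# The four cells of the 2 x 2 design: labeled pairing (paired/unpaired) x unlabeled pairing (paired/unpaired).
CONDITIONS = ["Lpaired-Upaired", "Lpaired-Uunpaired", "Lunpaired-Upaired", "Lunpaired-Uunpaired"]

def latin_square_orders(conditions):
    """Return a cyclic Latin square of presentation orders (one row per order)."""
    n = len(conditions)
    return [[conditions[(row + col) % n] for col in range(n)] for row in range(n)]

def assign_orders(n_subjects, conditions=CONDITIONS):
    """Give each subject one row of the Latin square, cycling through the rows."""
    square = latin_square_orders(conditions)
    return {subj: square[subj % len(square)] for subj in range(n_subjects)}

# Hypothetical usage for the 20 participants in this study:
# orders = assign_orders(20)
# orders[1] -> ['Lpaired-Uunpaired', 'Lunpaired-Upaired', 'Lunpaired-Uunpaired', 'Lpaired-Upaired']
</pre>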

== Glossary ==

Labeling; source pairing; source correlation.

== Research question ==

How do native English speakers learn to speak and read Chinese under various coordinative learning conditions?

== Background ==

In machine learning research, it has been found that multiple strategies and multiple modalities facilitate learning (Blum and Mitchell, 1998). However, the effectiveness of the properties of “co-training” theory has not yet been tested in human learners. We carried out this study as a follow-up to understand the pairs effect; in particular, we need to know whether it is restricted to, or larger for, [[unlabeled examples|unlabeled trials]].

== Dependent variables ==

[[Normal post-test]]: Accuracy of producing the English word in reading and/or listening situations.

== Independent variables ==

*Labeling
*Pairing
*Variation
*Correlation

== Hypothesis ==

Pairing the visual font and the auditory sound of Chinese characters should enhance learning relative to the unpaired condition, but the benefit should be more pronounced when the trials are unlabeled.

== Findings ==

[[Image:cotraining3.jpg]]

As shown in the figure above, the two paired unlabeled conditions had higher accuracies than the two unpaired unlabeled conditions. However, the differences did not reach statistical significance: the labeled-pairing effect was not significant (F(1,6) = 0.176, p = 0.689), the unlabeled-pairing effect was not significant (F(1,6) = 2.077, p = 0.20), and their interaction was not significant either (F(1,6) = 1.00, p = 0.356).
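
For reference, a 2 × 2 within-subject analysis like the one reported above can be run as a repeated-measures ANOVA; a minimal sketch with statsmodels follows. The long-format data file and column names are hypothetical, and this only illustrates the analysis type, not the original analysis code.

<pre>
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one row per subject per cell of the 2 x 2 design, with columns
# 'subject', 'labeled_pairing' ('paired'/'unpaired'), 'unlabeled_pairing' ('paired'/'unpaired'),
# and 'accuracy' (posttest proportion correct in that cell).
data = pd.read_csv("cotraining_pairing_posttest.csv")

model = AnovaRM(
    data,
    depvar="accuracy",
    subject="subject",
    within=["labeled_pairing", "unlabeled_pairing"],
)
print(model.fit())   # F and p values for the two main effects and their interaction
</pre>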

== Explanation ==

Even though the pairing effect was not statistically significant, it showed the pattern the hypothesis predicted. Pairing the visual and auditory modalities is more helpful than either single modality because the visual–auditory connection can be built during the learning process. However, when a label (English translation) is provided, modality pairing does not lead to any additional benefit, because there are already visual–lexical and auditory–lexical connections for the learner to process. It might be better for a human learner to focus on one connection at a time because of the working-memory load.

== Descendents ==

None.

== Further information ==
<hr />
<div>Under developing.<br />
----<br />
'''Summary Table'''<br />
*Node Title: Learning to read Chinese: [[Co-training]] in human (Study 2)<br />
*Researchers: Ying Liu, Charles Perfetti, Susan Dunlap, Suemei Wu, Tom Mitchell<br />
*PIs: Ying Liu, Charles Perfetti, Tom Mitchell<br />
*Others who have contributed 160 hours or more:<br />
*Graduate Students: Derek Chan, Susan Dunlap<br />
*Study Start Date Sep 1, 2006<br />
*Study End Date Dec 31, 2006<br />
*LearnLab Site and Courses , CMU Chinese Online<br />
*Number of Students: 20<br />
*Total Participant Hours for the study: 20<br />
*Data in the Data Shop: Yes<br />
----<br />
<br />
== Abstract ==<br />
The present study continued to explore how native English speakers learn to speak and read Chinese in a cotraining environment. The experiment consisted of two parts. The first part was training, which was used to teach the input (Chinese fonts and sounds) to output (English translations) mapping of 16 Chinese characters. four training methods were applied in a two by two crossed design. The two factors are labeled pairing and unlabeled pairing. Every subject received all four methods in a counter balanced order.<br />
The second part was posttest, in which students produced the English translation when they saw the Chinese fonts or hear the Chinese sounds one by one. The accuracy of translation was recorded.<br />
<br />
== Glossary ==<br />
2. A glossary that defines terms used elsewhere in this node but not defined in the nodes that are parents, grandparents, etc. of this node; <br />
<br />
labeling; source pairing; source correlation.<br />
<br />
== Research question ==<br />
<br />
How native English speakers learn to speak and read Chinese under various coordinative learning conditions. <br />
<br />
== Background ==<br />
<br />
In machine learning research, it has been found that multiple-strategies and multiple modalities facilitate learning (Blum and Mitchell, 1998). However, the effectiveness of the properties of “co-training” theory have not been tested in human learners yet. We carried out this study to directly test two important properties of this theory in human learners. There are two results from the finished experiment and one non-result of interest. Most dramatic is the advantage of written over spoken input. This has nothing to do with co-training but is interesting and important for L2 word learning (translation). Second is the pairs effect, the advantage of spoken + written input presented during unlabelled training compared with either one separately. The independence of the surface features of these inputs (specific speaker, specific font) was not a factor.<br />
<br />
To understand the pairs effect, we have to know whether it is restricted to or larger for [[unlabeled examples|unlabeled trials]]. Experiment 1 did not manipulate pairing in labeled trials. In the fall of 2006, we tested the pairing property under both labeled and unlabeled trails.<br />
<br />
To understand the correlation feature better, we are testing the correlation feature in an in-vivo setup with more learning sessions.<br />
<br />
== Dependent variables ==<br />
<br />
[[Normal post-test]]: Accuracy of producing the English word under reading and/or listening situation.<br />
<br />
== Independent variables ==<br />
<br />
Labeling<br />
Pairing<br />
Variation<br />
Correlation<br />
<br />
== Hypothesis ==<br />
<br />
Pairing of visual font and auditory sound of Chinese characters should enhance learning under both labeled and unlabeled trials, but the benefit is most significant when the trials are unlabeled.<br />
*<br />
[[Image:cotraining1.jpg]]<br />
<br />
== Findings ==<br />
<br />
*“Unlabelled paired” trials may aid learning. Learning meanings was facilitated by the addition of unlabeled paired trials that did not provide meaning.<br />
**However, this unlabeled-trials effect was restricted to cross-modal pairs (spoken syllable and written character); it was absent when only one (spoken syllable) or the other (written character) modality was presented.<br />
**Implication: Cross-modal inputs in this situation can establish multiple representations (speech-writing pairs) from which meaning links are more readily retrieved.<br />
*Written form learned better than spoken form Large advantage for the presentation of written characters compared with their corresponding spoken syllables for learning a form-meaning pair.<br />
*Benefits of uncorrelated examples was not observed. <br />
**Correlated examples: Given font and given speaker always co-occur (conditional dependent)<br />
**Uncorrelated examples: Given font occurs with all speakers; and given speaker occurs with all fonts (conditional independent)<br />
**This is still being assessed by using multiple learning sessions. <br />
<br />
[[Image:cotraining2.jpg]]<br />
<br />
== Explanation ==<br />
<br />
Learning meanings was facilitated by the addition of unlabeled paired trials that did not provide meaning implicates that predictions of the label are generated for unlabeled trials, so they serve as self-generated labeled trials and work as meaningful materials for learning. This effect is especially significant in multiple input situation (paired trials) because the establishment of multiple representations (speech-writing pairs) makes the “label prediction” more accurate.<br />
<br />
== Descendents ==<br />
<br />
None.<br />
<br />
== Further information ==</div>Liuying@pitt.eduhttps://learnlab.org/wiki/index.php?title=Co-training_of_Chinese_characters&diff=7178Co-training of Chinese characters2008-03-11T15:47:33Z<p>Liuying@pitt.edu: /* Abstract */</p>
<hr />
<div>----<br />
'''Summary Table'''<br />
*Node Title: Learning to read Chinese: [[Co-training]] in human (Study 1)<br />
*Researchers: Ying Liu, Charles Perfetti, Susan Dunlap, Gusheng Zi, Tom Mitchell<br />
*PIs: Ying Liu, Charles Perfetti, Tom Mitchell<br />
*Others who have contributed 160 hours or more:<br />
*Post-Docs: Gusheng Zi<br />
*Graduate Students: Derek Chan<br />
*Study Start Date Sep 1, 2005<br />
*Study End Date Dec 31, 2005<br />
*LearnLab Site and Courses , CMU Chinese Online<br />
*Number of Students: 20<br />
*Total Participant Hours for the study: 20<br />
*Data in the Data Shop: Yes<br />
----<br />
<br />
== Abstract ==<br />
The present study explored how native English speakers learn to speak and read Chinese in a cotraining environment. The experiment consisted of two parts. The first part was training, which was used to teach the input (Chinese fonts and sounds) to output (English translations) mapping of 16 Chinese characters. Training methods were manipulated in this part. A quarter of the subjects only received labeled training trials (English translation provided), the others received extra training trials with [[unlabeled examples|non-labeled trials]] (only the orthography or/and phonology without English translation). The non-labeled trials were further separated into three types: unpaired, correlated paired and uncorrelated paired, with each type used for one quarter of subjects.<br />
The second part was posttest, in which students produced the English translation when they saw the Chinese fonts or hear the Chinese sounds one by one. The accuracy of translation was recorded. It showed that [[unlabeled examples]] did help the learning, and uncorrelated paired examples did the best among all three types of unlabeled examples.<br />
<br />
== Glossary ==<br />
2. A glossary that defines terms used elsewhere in this node but not defined in the nodes that are parents, grandparents, etc. of this node; <br />
<br />
labeling; source pairing; source correlation.<br />
<br />
== Research question ==<br />
<br />
How native English speakers learn to speak and read Chinese under various coordinative learning conditions. <br />
<br />
== Background ==<br />
<br />
In machine learning research, it has been found that multiple-strategies and multiple modalities facilitate learning (Blum and Mitchell, 1998). However, the effectiveness of the properties of “co-training” theory have not been tested in human learners yet. We carried out this study to directly test two important properties of this theory in human learners. There are two results from the finished experiment and one non-result of interest. Most dramatic is the advantage of written over spoken input. This has nothing to do with co-training but is interesting and important for L2 word learning (translation). Second is the pairs effect, the advantage of spoken + written input presented during unlabelled training compared with either one separately. The independence of the surface features of these inputs (specific speaker, specific font) was not a factor.<br />
<br />
To understand the pairs effect, we have to know whether it is restricted to or larger for [[unlabeled examples|unlabeled trials]]. Experiment 1 did not manipulate pairing in labeled trials. In the fall of 2006, we tested the pairing property under both labeled and unlabeled trails.<br />
<br />
To understand the correlation feature better, we are testing the correlation feature in an in-vivo setup with more learning sessions.<br />
<br />
== Dependent variables ==<br />
<br />
[[Normal post-test]]: Accuracy of producing the English word under reading and/or listening situation.<br />
<br />
== Independent variables ==<br />
<br />
Labeling<br />
Pairing<br />
Variation<br />
Correlation<br />
<br />
== Hypothesis ==<br />
<br />
Pairing of visual font and auditory sound of Chinese characters should enhance learning under both labeled and unlabeled trials, but the benefit is most significant when the trials are unlabeled.<br />
*<br />
[[Image:cotraining1.jpg]]<br />
<br />
== Findings ==<br />
<br />
*“Unlabelled paired” trials may aid learning. Learning meanings was facilitated by the addition of unlabeled paired trials that did not provide meaning.<br />
**However, this unlabeled-trials effect was restricted to cross-modal pairs (spoken syllable and written character); it was absent when only one (spoken syllable) or the other (written character) modality was presented.<br />
**Implication: Cross-modal inputs in this situation can establish multiple representations (speech-writing pairs) from which meaning links are more readily retrieved.<br />
*Written form learned better than spoken form Large advantage for the presentation of written characters compared with their corresponding spoken syllables for learning a form-meaning pair.<br />
*Benefits of uncorrelated examples was not observed. <br />
**Correlated examples: Given font and given speaker always co-occur (conditional dependent)<br />
**Uncorrelated examples: Given font occurs with all speakers; and given speaker occurs with all fonts (conditional independent)<br />
**This is still being assessed by using multiple learning sessions. <br />
<br />
[[Image:cotraining2.jpg]]<br />
<br />
== Explanation ==<br />
<br />
Learning meanings was facilitated by the addition of unlabeled paired trials that did not provide meaning implicates that predictions of the label are generated for unlabeled trials, so they serve as self-generated labeled trials and work as meaningful materials for learning. This effect is especially significant in multiple input situation (paired trials) because the establishment of multiple representations (speech-writing pairs) makes the “label prediction” more accurate.<br />
<br />
== Descendents ==<br />
<br />
None.<br />
<br />
== Further information ==</div>Liuying@pitt.eduhttps://learnlab.org/wiki/index.php?title=Co-training_and_pairing&diff=7177Co-training and pairing2008-03-11T15:46:58Z<p>Liuying@pitt.edu: New page: ---- '''Summary Table''' *Node Title: Learning to read Chinese: Co-training in human (Study 2) *Researchers: Ying Liu, Charles Perfetti, Susan Dunlap, Suemei Wu, Tom Mitchell *PIs: Yin...</p>
<hr />
<div>----<br />
'''Summary Table'''<br />
*Node Title: Learning to read Chinese: [[Co-training]] in human (Study 2)<br />
*Researchers: Ying Liu, Charles Perfetti, Susan Dunlap, Suemei Wu, Tom Mitchell<br />
*PIs: Ying Liu, Charles Perfetti, Tom Mitchell<br />
*Others who have contributed 160 hours or more:<br />
*Graduate Students: Derek Chan, Susan Dunlap<br />
*Study Start Date Sep 1, 2006<br />
*Study End Date Dec 31, 2006<br />
*LearnLab Site and Courses , CMU Chinese Online<br />
*Number of Students: 20<br />
*Total Participant Hours for the study: 20<br />
*Data in the Data Shop: Yes<br />
----<br />
<br />
== Abstract ==<br />
The present study continued to explore how native English speakers learn to speak and read Chinese in a cotraining environment. The experiment consisted of two parts. The first part was training, which was used to teach the input (Chinese fonts and sounds) to output (English translations) mapping of 16 Chinese characters. Four training methods were applied in a two-by-two crossed design; the two factors were labeled pairing and unlabeled pairing. Every subject received all four methods in a counterbalanced order.<br />
The second part was the posttest, in which students produced the English translation when they saw the Chinese fonts or heard the Chinese sounds one by one. The accuracy of translation was recorded.<br />
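<br />
As a concrete illustration of how such a counterbalanced within-subject order can be constructed, the sketch below assigns each subject one row of a balanced Latin square over the four training methods. The condition names and the function are assumptions made for the example, not the study's actual materials.<br />
<pre>
CONDITIONS = ["labeled-paired", "labeled-unpaired",
              "unlabeled-paired", "unlabeled-unpaired"]  # illustrative names

def balanced_latin_square(conditions):
    """One order per row; with an even number of conditions, every condition
    appears once in each position and precedes every other equally often."""
    n = len(conditions)
    square = []
    for r in range(n):
        row, lo, hi = [], 0, n - 1
        for c in range(n):
            base = lo if c % 2 == 0 else hi
            if c % 2 == 0:
                lo += 1
            else:
                hi -= 1
            row.append(conditions[(base + r) % n])
        square.append(row)
    return square

# Subject s could then receive the order balanced_latin_square(CONDITIONS)[s % 4].
</pre>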
<br />
== Glossary ==<br />
<br />
labeling; source pairing; source correlation.<br />
<br />
== Research question ==<br />
<br />
How do native English speakers learn to speak and read Chinese under various coordinative learning conditions?<br />
<br />
== Background ==<br />
<br />
In machine learning research, it has been found that multiple strategies and multiple modalities facilitate learning (Blum and Mitchell, 1998). However, the effectiveness of the properties of the “co-training” theory has not been tested in human learners yet. We carried out this study to directly test two important properties of this theory in human learners. There are two results from the finished experiment and one non-result of interest. Most dramatic is the advantage of written over spoken input. This has nothing to do with co-training but is interesting and important for L2 word learning (translation). Second is the pairs effect: the advantage of spoken + written input presented during unlabeled training compared with either one presented separately. The independence of the surface features of these inputs (specific speaker, specific font) was not a factor.<br />
<br />
To understand the pairs effect, we have to know whether it is restricted to, or merely larger for, [[unlabeled examples|unlabeled trials]]. Experiment 1 did not manipulate pairing in labeled trials. In the fall of 2006, we tested the pairing property under both labeled and unlabeled trials.<br />
<br />
To understand the correlation feature better, we are testing it in an in-vivo setup with more learning sessions.<br />
<br />
== Dependent variables ==<br />
<br />
[[Normal post-test]]: Accuracy of producing the English word in the reading and/or listening conditions.<br />
<br />
== Independent variables ==<br />
<br />
*Labeling<br />
*Pairing<br />
*Variation<br />
*Correlation<br />
<br />
== Hypothesis ==<br />
<br />
Pairing of visual font and auditory sound of Chinese characters should enhance learning under both labeled and unlabeled trials, but the benefit is most significant when the trials are unlabeled.<br />
[[Image:cotraining1.jpg]]<br />
<br />
== Findings ==<br />
<br />
*“Unlabeled paired” trials may aid learning. Learning of meanings was facilitated by the addition of unlabeled paired trials that did not provide meaning.<br />
**However, this unlabeled-trials effect was restricted to cross-modal pairs (spoken syllable and written character); it was absent when only one modality (spoken syllable) or the other (written character) was presented.<br />
**Implication: Cross-modal inputs in this situation can establish multiple representations (speech-writing pairs) from which meaning links are more readily retrieved.<br />
*Written form was learned better than spoken form: there was a large advantage for the presentation of written characters compared with their corresponding spoken syllables in learning a form-meaning pair.<br />
*A benefit of uncorrelated examples was not observed.<br />
**Correlated examples: a given font and a given speaker always co-occur (conditionally dependent)<br />
**Uncorrelated examples: a given font occurs with all speakers, and a given speaker occurs with all fonts (conditionally independent)<br />
**This is still being assessed by using multiple learning sessions.<br />
<br />
[[Image:cotraining2.jpg]]<br />
<br />
== Explanation ==<br />
<br />
The finding that learning of meanings was facilitated by the addition of unlabeled paired trials that did not provide meaning implies that predictions of the label are generated for the unlabeled trials, so that these trials serve as self-generated labeled trials and work as meaningful materials for learning. This effect is especially significant in the multiple-input situation (paired trials) because the establishment of multiple representations (speech-writing pairs) makes the “label prediction” more accurate.<br />
<br />
== Descendents ==<br />
<br />
None.<br />
<br />
== Further information ==</div>Liuying@pitt.eduhttps://learnlab.org/wiki/index.php?title=Coordinative_Learning&diff=7176Coordinative Learning2008-03-11T14:41:59Z<p>Liuying@pitt.edu: /* Visualizations and Multi-modal sources */</p>
<hr />
<div>= The PSLC Coordinative Learning cluster =<br />
<br />
== Abstract ==<br />
The studies in the Coordinative Learning cluster tend to focus on varying ''a)'' the types of information available for learning or ''b)'' the instructional methods that they employ. In particular, the studies focus on the impact of having learners coordinate two or more of these types. Given that the student has multiple [[sources]]/methods available, two factors that might impact learning are:<br />
<br />
*What is the relationship between the content in the two sources or the content generated by the two methods? Our hypothesis is that the two sources or methods facilitate [[robust learning]] when a [[knowledge component]] is difficult to understand or absent in one and is present or easier to understand in the other.<br />
*When and how does the student coordinate between the two sources or methods? Our hypothesis is that students should be encouraged to compare the two, perhaps by putting them close together in space or time. <br />
<br />
At the micro-level, the overall hypothesis is that robust learning occurs when the [[learning event space]] has target paths whose [[sense making]] difficulties complement each other (as expressed in the first bullet above) and the students make path choices that take advantage of these [[complementary]] paths (as in the second bullet, above). This hypothesis is just a specialization of the [[Root_node|general PSLC hypothesis]] to this cluster.<br />
<br />
The matrix below shows how studies in this cluster (pages for these studies can be found in the Descendants section below) either test or make use of various [[instructional method|instructional methods]] or treatments. When a study tests an instructional method, a "v" is shown in the appropriate cell to indicate that that method is '''varied''' in the study; that is, the [[robust learning]] gains of an experimental condition that receives this method are contrasted with those of an otherwise equivalent control condition that does not receive this method. In this case (when a "v" is present), the study tests the [[InstructionalPrinciples|instructional principle]] indicated in the column. When a cell contains a "b", it indicates that '''both''' the experimental and control conditions use this instructional method (or employ this instructional principle). In this case, the study is not a true experimental test of the principle.<br />
<br />
<br><center>[[Image:cl-theory.jpg]]</center><br />
<br />
== Glossary ==<br />
[[:Category:Coordinative Learning|Coordinative Learning]] glossary.<br />
<br />
*'''[[Co-training]]'''<br />
*'''[[Complementary]]'''<br />
*'''[[Conceptual tasks]]''' <br />
*'''[[Contiguity]]'''<br />
*'''[[Coordination]]'''<br />
*'''[[Ecological control group]]'''<br />
*'''[[External representations]]'''<br />
*'''[[Input sources ]]'''<br />
*'''[[Instructional method]]'''<br />
*'''[[Multimedia sources]]'''<br />
*'''[[Procedural tasks]]''' <br />
*'''[[Self-explanation]]'''<br />
*'''[[Self-supervised learning]]'''<br />
*'''[[Sources]]'''<br />
*'''[[Strategies]]'''<br />
*'''[[Unlabeled examples]]'''<br />
<br />
== Research questions ==<br />
<br />
When and how does coordinating multiple sources of information or lines of reasoning increase robust learning?<br />
<br />
Two sub-groups of coordinative learning studies are exploring these more specific questions:<br />
<br />
=== Visualizations and Multi-modal sources ===<br />
<br />
When does adding visualizations or other multi-modal input enhance robust learning and how do we best support students in coordinating these sources?<br />
<br />
=== Examples and Explanations ===<br />
<br />
When and how should example study be combined and coordinated with problem solving to increase robust learning? When and how should explicit explanations be added or requested of students before, during, or after example study and problem solving practice?<br />
<br />
== Independent variables ==<br />
<br />
*Content of the sources (e.g., pictures, diagrams, written text, audio, animation) or the encouraged lines of reasoning (e.g., example study, self-explanation, conceptual task, procedural task) and combinations<br />
<br />
*Instructional activities designed to engage students in [[coordination]] (e.g., conceptual vs. [[procedural]] exercises, contiguous presentation of sources, [[self-explanation]])<br />
<br />
See [[:Category:Independent Variables]]<br />
<br />
== Dependent variables ==<br />
[[Normal post-test]] and measures of [[robust learning]].<br />
<br />
== Hypotheses ==<br />
When students are given sources/methods whose [[sense making]] difficulties are complementary and they are engaged in coordinating the sources/methods, then their learning will be more robust than it would otherwise be.<br />
<br />
== Explanation ==<br />
<br />
There are both [[sense making]] and [[foundational skill building]] explanations. From the sense making perspective, if the sources/methods yield complementary content and the student is engaged in coordinating them, then the student is more likely to successfully understand the instruction because if a student fails to understand one of the sources/methods, he can use the second to make sense of the first. From a foundational skill building perspective, attending to both sources/methods simultaneously associates [[features]] from both with the learned knowledge components, thus potentially increasing [[feature validity]] and hence [[robust learning]].<br />
<br />
== Descendents ==<br />
<br />
=== Visualizations and Multi-modal sources ===<br />
*[[Contiguous Representations for Robust Learning (Aleven & Butcher)]]<br />
**[[Static vs. Animated Visual Representations for Science Learning (Kaye, Small, Butcher, & Chi)]]<br />
*[[Mapping Visual and Verbal Information: Integrated Hints in Geometry (Aleven & Butcher)]]<br />
**[[Training Geometry Concepts with Visual and Verbal Sources (Burchfield, Aleven, & Butcher)]]<br />
*[[Visual Representations in Science Learning | Visual Representations in Science Learning (Davenport, Klahr & Koedinger)]]<br />
*[[Co-training of Chinese characters| Co-training of Chinese characters (Liu, Perfetti, Dunlap, Zi, Mitchell)]]<br />
*[[Co-training and pairing| The pairing effect in Chinese cotraining (Liu, Perfetti, Dunlap, Wu, Mitchell)]]<br />
*[[Learning Chinese pronunciation from a “talking head”| Learning Chinese pronunciation from a “talking head” (Liu, Massaro, Dunlap, Wu, Chen,Chan, Perfetti)]] [Was in Refinement and Fluency]<br />
*[[Visual Feature Focus in Geometry: Instructional Support for Visual Coordination During Learning (Butcher & Aleven)]]<br />
*[[Learning About Emergence and Heat Transfer (Chi)]]<br />
<br />
=== Examples and Explanations ===<br />
*[[Booth | Improving skill at solving equations through better encoding of algebraic concepts (Booth, Siegler, Koedinger & Rittle-Johnson)]]<br />
*[[McLaren_et_al_-_Studying_the_Learning_Effect_of_Personalization_and_Worked_Examples_in_the_Solving_of_Stoich_Problems | Studying the Learning Effect of Personalization and Worked Examples in the Solving of Stoichiometry Problems (McLaren, Koedinger & Yaron)]]<br />
*[[Note-Taking_Technologies | Note-taking Project Page (Bauer & Koedinger)]]<br />
**[[Note-Taking: Restriction and Selection]] (completed)<br />
**[[Note-Taking: Coordination]] (planned)<br />
*[[REAP_main | The REAP Project: Implicit and explicit instruction on word meanings (Juffs & Eskenazi)]]<br />
*[[Help_Lite (Aleven, Roll)|Hints during tutored problem solving – the effect of fewer hint levels with greater conceptual content (Aleven & Roll)]]<br />
*[[Handwriting Algebra Tutor]] (Anthony, Yang & Koedinger)<br />
**[[Lab study proof-of-concept for handwriting vs typing input for learning algebra equation-solving]] (completed)<br />
**[[Effect of adding simple worked examples to problem-solving in algebra learning]] (completed, analysis in progress)<br />
**[[In vivo comparison of Cognitive Tutor Algebra using handwriting vs typing input]] (in progress)<br />
*[[Bridging_Principles_and_Examples_through_Analogy_and_Explanation | Bridging Principles and Examples through Analogy and Explanation (Nokes & VanLehn)]]<br />
*[[Does learning from worked-out examples improve tutored problem solving? | Does learning from worked-out examples improve tutored problem solving? (Renkl, Aleven & Salden)]] [Also in Interactive Communication]<br />
*[[Ringenberg_Examples-as-Help | Scaffolding Problem Solving with Embedded Example to Promote Deep Learning (Ringenberg & VanLehn)]]<br />
*[[Roll_IPL | Invention as Preparation for Learning (Roll, Aleven, Koedinger & Schwartz)]]<br />
*[[Baker_Choices_in_LE_Space | How Content and Interface Features Influence Student Choices Within the Learning Space (Baker, Corbett, Koedinger, & Rodrigo)]]<br />
*[[Mayer_and_McLaren_-_Social_Intelligence_And_Computer_Tutors | Building Social Intelligence into Computer-Based Tutors (Mayer & McLaren)]]<br />
<br />
== Annotated Bibliography ==<br />
Much research in human and machine learning has advocated various kinds of “multiples” to assist learning: <br />
* multiple data sources (e.g., human learning (HL): Mayer, 2001; machine learning (ML): Blum & Mitchell, 1998; Collins & Singer, 1999). <br />
* multiple representations (e.g., HL: Ainsworth & Van Labeke, 2004; ML: Liere & Tadepalli, 1997), <br />
* multiple strategies (e.g., HL: Klahr & Siegler, 1978; ML: Michalski & Tecucci 1997; Saitta, Botta, & Neri, 1993); <br />
* multiple learning tasks (e.g., HL: Holland, Holyoak, Nisbett, & Thagard, 1986; ML: Caruana, 1997; Case, Jain, Ott, Sharma, & Stephan, 1998); <br />
<br />
Experiments in human learning have demonstrated, for instance, that instruction that combines rules or principles and [[example]]s yields better results than either alone (Holland, Holyoak, Nisbett, & Thagard, 1986), or that iterative instruction of both [[Procedural tasks|procedures]] and [[Conceptual tasks|concepts]] yields better learning (Rittle-Johnson & Koedinger, 2002; Rittle-Johnson, Siegler, & Alibali, 2001). <br />
<br />
Experiments in machine learning have demonstrated how more robust, generalizable learning can be achieved by training a single learner on ''multiple'' related tasks (Caruana 1997) or by training ''multiple'' learning systems on the same task (Blum & Mitchell 1998; Collins & Singer 1999; Muslea, Minton, & Knoblock, 2002). Blum and Mitchell (1998) provide both empirical results and a proof of the circumstances under which strategy combinations enhance learning. In particular, the [[co-training]] approach for combining multiple learning strategies yields better learning to the extent that the learning strategies produce “uncorrelated errors” – when one is wrong the other is often right. As an example of PSLC work, Donmez et al. (2005) demonstrate, using a multi-dimensional collaborative process analysis, that regularities across ''multiple'' codings of the same data can be exploited for the purpose of improving text classification accuracy for difficult codings.<br />
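<br />
The “uncorrelated errors” condition can be made concrete with a small, hypothetical check on two learners' predictions. The function below is an illustration of what the condition means, not code from any of the studies cited above; its name and inputs are made up for the example.<br />
<pre>
import numpy as np

def error_complementarity(pred_a, pred_b, truth):
    """Estimate how often one learner is right when the other is wrong, and
    how correlated the two learners' error indicators are."""
    pred_a, pred_b, truth = map(np.asarray, (pred_a, pred_b, truth))
    err_a, err_b = pred_a != truth, pred_b != truth
    p_b_right_given_a_wrong = float(np.mean(~err_b[err_a])) if err_a.any() else float("nan")
    p_a_right_given_b_wrong = float(np.mean(~err_a[err_b])) if err_b.any() else float("nan")
    # Pearson correlation of the 0/1 error indicators (phi coefficient);
    # values near 0 indicate the "uncorrelated errors" regime.
    phi = float(np.corrcoef(err_a.astype(float), err_b.astype(float))[0, 1])
    return p_b_right_given_a_wrong, p_a_right_given_b_wrong, phi
</pre>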
<br />
An ambitious goal of PSLC is to provide a rigorous causal theory of human learning results at the level of precision of machine learning research. <br />
<br />
* Ainsworth, S., Bibby, P., & Wood, D. (2002). Examining the effects of different multiple representational systems in learning primary mathematics. The Journal of the Learning Sciences, 11(1), 25–61.<br />
* Ainsworth, S.E. & Van Labeke (2004) Multiple forms of dynamic representation. Learning and Instruction, 14(3), 241-255. <br />
* Blum, A., & Mitchell, T. (1998). Combining labeled and unlabeled data with co-training. In Proceedings of Eleventh Annual Conference on Computational Learning Theory (COLT), (pp. 92–100). New York: ACM Press. Available: citeseer.nj.nec.com/blum98combining.html<br />
* Caruana, R. (1997). Multitask learning. Machine Learning 28(1), 41-75. Available: citeseer.nj.nec.com/caruana97multitask.html.<br />
* Case, J., Jain, S., Ott, M., Sharma, A., & Stephan, F. (1998). Robust learning aided by context. In Proceedings of Eleventh Annual Conference on Computational Learning Theory (COLT), (pp. 44-55). New York: ACM Press.<br />
* Collins, M., & Singer, Y. (1999). Unsupervised models for named entity classification. In Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (pp. 189–196).<br />
* Donmez, P., Rose, C. P., Stegmann, K., Weinberger, A., and Fischer, F. (2005). Supporting CSCL with Automatic Corpus Analysis Technology, to appear in the Proceedings of Computer Supported Collaborative Learning.<br />
* Holland, J. H., Holyoak, K. J., Nisbett, R. E., & Thagard, P. R. (1986). Induction: Processes of inference, learning, and discovery. Cambridge, MA: MIT Press.<br />
* Klahr D., and Siegler R.S. (1978). The Representation of Children's Knowledge. In H.W. Reese and L.P. Lipsitt (Eds.), Advances in Child Development and Behavior, Academic Press, New York, NY, pp. 61-116.<br />
* Liere, R., & Tadepalli, P. (1997). Active learning with committees for text categorization. In Proceedings of AAAI-97, 14th Conference of the American Association for Artificial Intelligence (pp. 591—596). Menlo Park, CA: AAAI Press.<br />
* Mayer, R. E. (2001). Multimedia learning. New York: Cambridge University Press.<br />
* Michalski, R., & Tecuci, G. (Eds.) (1997). Machine learning: A multi-strategy approach. Morgan Kaufmann.<br />
* Muslea, I., Minton, S., & Knoblock, C. (2002). Active + semi-supervised learning = robust multi-view learning. In Proceedings of ICML-2002. Sydney, Australia.<br />
* Rittle-Johnson, B., Siegler, R. S., & Alibali, M. W. (2001). Developing conceptual understanding and procedural skill in mathematics: An iterative process. Journal of Educational Psychology, 93(2), 346–362.<br />
* Rittle-Johnson, B., & Koedinger, K. R. (2002). Comparing instructional strategies for integrating conceptual and procedural knowledge. Paper presented at the Psychology of Mathematics Education, National, Athens, GA.<br />
* Saitta, L., Botta, M., & Neri, F. (1993). Multi-strategy learning and theory revision. Machine Learning, 11(2/3), 153–172.<br />
[[Category:Cluster]]</div>Liuying@pitt.eduhttps://learnlab.org/wiki/index.php?title=Coordinative_Learning&diff=7175Coordinative Learning2008-03-11T14:41:30Z<p>Liuying@pitt.edu: /* Visualizations and Multi-modal sources */</p>
<hr />
<div>= The PSLC Coordinative Learning cluster =<br />
<br />
== Abstract ==<br />
The studies in the Coordinative Learning cluster tend to focus on varying ''a)'' the types of information available for learning or ''b)'' the instructional methods that they employ. In particular, the studies focus on the impact of having learners coordinate two or more of these types. Given that the student has multiple [[sources]]/methods available, two factors that might impact learning are:<br />
<br />
*What is the relationship between the content in the two sources or the content generated by the two methods? Our hypothesis is that the two sources or methods facilitate [[robust learning]] when a [[knowledge component]] is difficult to understand or absent in one and is present or easier to understand in the other.<br />
*When and how does the student coordinate between the two sources or methods? Our hypothesis is that students should be encouraged to compare the two, perhaps by putting them close together in space or time. <br />
<br />
At the micro-level, the overall hypothesis is that robust learning occurs when the [[learning event space]] has target paths whose [[sense making]] difficulties complement each other (as expressed in the first bullet above) and the students make path choices that take advantage of these [[complementary]] paths (as in the second bullet, above). This hypothesis is just a specialization of the [[Root_node|general PSLC hypothesis]] to this cluster.<br />
<br />
The matrix below shows how studies in this cluster (pages for these studies can be found in the Descendants section below) either test or make use of various [[instructional method|instructional methods]] or treatments. When a study tests an instructional method, a "v" is shown in the appropriate cell to indicate that that method is '''varied''' in the study; that is, the [[robust learning]] gains of an experimental condition that receives this method are contrasted with those of an otherwise equivalent control condition that does not receive this method. In this case (when a "v" is present), the study tests the [[InstructionalPrinciples|instructional principle]] indicated in the column. When a cell contains a "b", it indicates that '''both''' the experimental and control conditions use this instructional method (or employ this instructional principle). In this case, the study is not a true experimental test of the principle.<br />
<br />
<br><center>[[Image:cl-theory.jpg]]</center><br />
<br />
== Glossary ==<br />
[[:Category:Coordinative Learning|Coordinative Learning]] glossary.<br />
<br />
*'''[[Co-training]]'''<br />
*'''[[Complementary]]'''<br />
*'''[[Conceptual tasks]]''' <br />
*'''[[Contiguity]]'''<br />
*'''[[Coordination]]'''<br />
*'''[[Ecological control group]]'''<br />
*'''[[External representations]]'''<br />
*'''[[Input sources ]]'''<br />
*'''[[Instructional method]]'''<br />
*'''[[Multimedia sources]]'''<br />
*'''[[Procedural tasks]]''' <br />
*'''[[Self-explanation]]'''<br />
*'''[[Self-supervised learning]]'''<br />
*'''[[Sources]]'''<br />
*'''[[Strategies]]'''<br />
*'''[[Unlabeled examples]]'''<br />
<br />
== Research questions ==<br />
<br />
When and how does coordinating multiple sources of information or lines of reasoning increase robust learning?<br />
<br />
Two sub-groups of coordinative learning studies are exploring these more specific questions:<br />
<br />
=== Visualizations and Multi-modal sources ===<br />
<br />
When does adding visualizations or other multi-modal input enhance robust learning and how do we best support students in coordinating these sources?<br />
<br />
=== Examples and Explanations ===<br />
<br />
When and how should example study be combined and coordinated with problem solving to increase robust learning? When and how should explicit explanations be added or requested of students before, during, or after example study and problem solving practice?<br />
<br />
== Independent variables ==<br />
<br />
*Content of the sources (e.g., pictures, diagrams, written text, audio, animation) or the encouraged lines of reasoning (e.g., example study, self-explanation, conceptual task, procedural task) and combinations<br />
<br />
*Instructional activities designed to engage students in [[coordination]] (e.g., conceptual vs. [[procedural]] exercises, contiguous presentation of sources, [[self-explanation]])<br />
<br />
See [[:Category:Independent Variables]]<br />
<br />
== Dependent variables ==<br />
[[Normal post-test]] and measures of [[robust learning]].<br />
<br />
== Hypotheses ==<br />
When students are given sources/methods whose [[sense making]] difficulties are complementary and they are engaged in coordinating the sources/methods, then their learning will be more robust than it would otherwise be.<br />
<br />
== Explanation ==<br />
<br />
There are both [[sense making]] and [[foundational skill building]] explanations. From the sense making perspective, if the sources/methods yield complementary content and the student is engaged in coordinating them, then the student is more likely to successfully understand the instruction because if a student fails to understand one of the sources/methods, he can use the second to make sense of the first. From a foundational skill building perspective, attending to both sources/methods simultaneously associates [[features]] from both with the learned knowledge components, thus potentially increasing [[feature validity]] and hence [[robust learning]].<br />
<br />
== Descendents ==<br />
<br />
=== Visualizations and Multi-modal sources ===<br />
*[[Contiguous Representations for Robust Learning (Aleven & Butcher)]]<br />
**[[Static vs. Animated Visual Representations for Science Learning (Kaye, Small, Butcher, & Chi)]]<br />
*[[Mapping Visual and Verbal Information: Integrated Hints in Geometry (Aleven & Butcher)]]<br />
**[[Training Geometry Concepts with Visual and Verbal Sources (Burchfield, Aleven, & Butcher)]]<br />
*[[Visual Representations in Science Learning | Visual Representations in Science Learning (Davenport, Klahr & Koedinger)]]<br />
*[[Co-training of Chinese characters| Co-training of Chinese characters (Liu, Perfetti, Dunlap, Zi, Mitchell)]]<br />
*[[Co-training and pairing| The pairing effect in Chinese cotraining (Liu, Perfetti, Dunlap, Wu, Mitchell)]]<br />
*[[Learning Chinese pronunciation from a “talking head”| Learning Chinese pronunciation from a “talking head” (Liu, Massaro, Dunlap, Wu, Chen,Chan, Perfetti)]] [Was in Refinement and Fluency]<br />
*[[Visual Feature Focus in Geometry: Instructional Support for Visual Coordination During Learning (Butcher & Aleven)]]<br />
*[[Learning About Emergence and Heat Transfer (Chi)]]<br />
<br />
=== Examples and Explanations ===<br />
*[[Booth | Improving skill at solving equations through better encoding of algebraic concepts (Booth, Siegler, Koedinger & Rittle-Johnson)]]<br />
*[[McLaren_et_al_-_Studying_the_Learning_Effect_of_Personalization_and_Worked_Examples_in_the_Solving_of_Stoich_Problems | Studying the Learning Effect of Personalization and Worked Examples in the Solving of Stoichiometry Problems (McLaren, Koedinger & Yaron)]]<br />
*[[Note-Taking_Technologies | Note-taking Project Page (Bauer & Koedinger)]]<br />
**[[Note-Taking: Restriction and Selection]] (completed)<br />
**[[Note-Taking: Coordination]] (planned)<br />
*[[REAP_main | The REAP Project: Implicit and explicit instruction on word meanings (Juffs & Eskenazi)]]<br />
*[[Help_Lite (Aleven, Roll)|Hints during tutored problem solving – the effect of fewer hint levels with greater conceptual content (Aleven & Roll)]]<br />
*[[Handwriting Algebra Tutor]] (Anthony, Yang & Koedinger)<br />
**[[Lab study proof-of-concept for handwriting vs typing input for learning algebra equation-solving]] (completed)<br />
**[[Effect of adding simple worked examples to problem-solving in algebra learning]] (completed, analysis in progress)<br />
**[[In vivo comparison of Cognitive Tutor Algebra using handwriting vs typing input]] (in progress)<br />
*[[Bridging_Principles_and_Examples_through_Analogy_and_Explanation | Bridging Principles and Examples through Analogy and Explanation (Nokes & VanLehn)]]<br />
*[[Does learning from worked-out examples improve tutored problem solving? | Does learning from worked-out examples improve tutored problem solving? (Renkl, Aleven & Salden)]] [Also in Interactive Communication]<br />
*[[Ringenberg_Examples-as-Help | Scaffolding Problem Solving with Embedded Example to Promote Deep Learning (Ringenberg & VanLehn)]]<br />
*[[Roll_IPL | Invention as Preparation for Learning (Roll, Aleven, Koedinger & Schwartz)]]<br />
*[[Baker_Choices_in_LE_Space | How Content and Interface Features Influence Student Choices Within the Learning Space (Baker, Corbett, Koedinger, & Rodrigo)]]<br />
*[[Mayer_and_McLaren_-_Social_Intelligence_And_Computer_Tutors | Building Social Intelligence into Computer-Based Tutors (Mayer & McLaren)]]<br />
<br />
== Annotated Bibliography ==<br />
Much research in human and machine learning has advocated various kinds of “multiples” to assist learning: <br />
* multiple data sources (e.g., human learning (HL): Mayer, 2001; machine learning (ML): Blum & Mitchell, 1998; Collins & Singer, 1999). <br />
* multiple representations (e.g., HL: Ainsworth & Van Labeke, 2004; ML: Liere & Tadepalli, 1997), <br />
* multiple strategies (e.g., HL: Klahr & Siegler, 1978; ML: Michalski & Tecucci 1997; Saitta, Botta, & Neri, 1993); <br />
* multiple learning tasks (e.g., HL: Holland, Holyoak, Nisbett, & Thagard, 1986; ML: Caruana, 1997; Case, Jain, Ott, Sharma, & Stephan, 1998); <br />
<br />
Experiments in human learning have demonstrated, for instance, that instruction that combines rules or principles and [[example]]s yields better results than either alone (Holland, Holyoak, Nisbett, & Thagard, 1986), or that iterative instruction of both [[Procedural tasks|procedures]] and [[Conceptual tasks|concepts]] yields better learning (Rittle-Johnson & Koedinger, 2002; Rittle-Johnson, Siegler, & Alibali, 2001). <br />
<br />
Experiments in machine learning have demonstrated how more robust, generalizable learning can be achieved by training a single learner on ''multiple'' related tasks (Caruana 1997) or by training ''multiple'' learning systems on the same task (Blum & Mitchell 1998; Collins & Singer 1999; Muslea, Minton, & Knoblock, 2002). Blum and Mitchell (1998) provide both empirical results and a proof of the circumstances under which strategy combinations enhance learning. In particular, the [[co-training]] approach for combining multiple learning strategies yields better learning to the extent that the learning strategies produce “uncorrelated errors” – when one is wrong the other is often right. As an example of PSLC work, Donmez et al. (2005) demonstrate, using a multi-dimensional collaborative process analysis, that regularities across ''multiple'' codings of the same data can be exploited for the purpose of improving text classification accuracy for difficult codings.<br />
<br />
An ambitious goal of PSLC is to provide a rigorous causal theory of human learning results at the level of precision of machine learning research. <br />
<br />
* Ainsworth, S., Bibby, P., & Wood, D. (2002). Examining the effects of different multiple representational systems in learning primary mathematics. The Journal of the Learning Sciences, 11(1), 25–61.<br />
* Ainsworth, S.E. & Van Labeke (2004) Multiple forms of dynamic representation. Learning and Instruction, 14(3), 241-255. <br />
* Blum, A., & Mitchell, T. (1998). Combining labeled and unlabeled data with co-training. In Proceedings of Eleventh Annual Conference on Computational Learning Theory (COLT), (pp. 92–100). New York: ACM Press. Available: citeseer.nj.nec.com/blum98combining.html<br />
* Caruana, R. (1997). Multitask learning. Machine Learning 28(1), 41-75. Available: citeseer.nj.nec.com/caruana97multitask.html.<br />
* Case, J., Jain, S., Ott, M., Sharma, A., & Stephan, F. (1998). Robust learning aided by context. In Proceedings of Eleventh Annual Conference on Computational Learning Theory (COLT), (pp. 44-55). New York: ACM Press.<br />
* Collins, M., & Singer, Y. (1999). Unsupervised models for named entity classification. In Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (pp. 189–196).<br />
* Donmez, P., Rose, C. P., Stegmann, K., Weinberger, A., and Fischer, F. (2005). Supporting CSCL with Automatic Corpus Analysis Technology, to appear in the Proceedings of Computer Supported Collaborative Learning.<br />
* Holland, J. H., Holyoak, K. J., Nisbett, R. E., & Thagard, P. R. (1986). Induction: Processes of inference, learning, and discovery. Cambridge, MA: MIT Press.<br />
* Klahr D., and Siegler R.S. (1978). The Representation of Children's Knowledge. In H.W. Reese and L.P. Lipsitt (Eds.), Advances in Child Development and Behavior, Academic Press, New York, NY, pp. 61-116.<br />
* Liere, R., & Tadepalli, P. (1997). Active learning with committees for text categorization. In Proceedings of AAAI-97, 14th Conference of the American Association for Artificial Intelligence (pp. 591—596). Menlo Park, CA: AAAI Press.<br />
* Mayer, R. E. (2001). Multimedia learning. New York: Cambridge University Press.<br />
* Michalski, R., & Tecuci, G. (Eds.) (1997). Machine learning: A multi-strategy approach. Morgan Kaufmann.<br />
* Muslea, I., Minton, S., & Knoblock, C. (2002). Active + semi-supervised learning = robust multi-view learning. In Proceedings of ICML-2002. Sydney, Australia.<br />
* Rittle-Johnson, B., Siegler, R. S., & Alibali, M. W. (2001). Developing conceptual understanding and procedural skill in mathematics: An iterative process. Journal of Educational Psychology, 93(2), 346–362.<br />
* Rittle-Johnson, B., & Koedinger, K. R. (2002). Comparing instructional strategies for integrating conceptual and procedural knowledge. Paper presented at the Psychology of Mathematics Education, National, Athens, GA.<br />
* Saitta, L., Botta, M., & Neri, F. (1993). Multi-strategy learning and theory revision. Machine Learning, 11(2/3), 153–172.<br />
[[Category:Cluster]]</div>Liuying@pitt.eduhttps://learnlab.org/wiki/index.php?title=Co-training_of_Chinese_characters&diff=7174Co-training of Chinese characters2008-03-11T14:33:42Z<p>Liuying@pitt.edu: /* Further information */</p>
<hr />
<div>----<br />
'''Summary Table'''<br />
*Node Title: Learning to read Chinese: [[Co-training]] in human (Study 1)<br />
*Researchers: Ying Liu, Charles Perfetti, Susan Dunlap, Gusheng Zi, Tom Mitchell<br />
*PIs: Ying Liu, Charles Perfetti, Tom Mitchell<br />
*Others who have contributed 160 hours or more:<br />
*Post-Docs: Gusheng Zi<br />
*Graduate Students: Derek Chan<br />
*Study Start Date Sep 1, 2005<br />
*Study End Date Dec 31, 2005<br />
*LearnLab Site and Courses , CMU Chinese Online<br />
*Number of Students: 20<br />
*Total Participant Hours for the study: 20<br />
*Data in the Data Shop: Yes<br />
----<br />
<br />
== Abstract ==<br />
The present study explored how native English speakers learn to speak and read Chinese in a cotraining environment. The experiment consisted of two parts. The first part was training, which was used to teach the input (Chinese fonts and sounds) to output (English translations) mapping of 16 Chinese characters. Training methods were manipulated in this part. A quarter of the subjects received only labeled training trials (English translation provided); the others received extra training trials with [[unlabeled examples|non-labeled trials]] (only the orthography and/or phonology, without the English translation). The non-labeled trials were further separated into three types: unpaired, correlated paired, and uncorrelated paired, with each type used for one quarter of the subjects.<br />
<br />
The second part was posttesting, in which students produced the English translation when they saw the Chinese fonts or heard the Chinese sounds one by one. The accuracy of translation was recorded. The results showed that [[unlabeled examples]] did help the learning, and uncorrelated paired examples did the best among all three types of unlabeled examples.<br />
<br />
== Glossary ==<br />
<br />
labeling; source pairing; source correlation.<br />
<br />
== Research question ==<br />
<br />
How do native English speakers learn to speak and read Chinese under various coordinative learning conditions?<br />
<br />
== Background ==<br />
<br />
In machine learning research, it has been found that multiple strategies and multiple modalities facilitate learning (Blum and Mitchell, 1998). However, the effectiveness of the properties of the “co-training” theory has not been tested in human learners yet. We carried out this study to directly test two important properties of this theory in human learners. There are two results from the finished experiment and one non-result of interest. Most dramatic is the advantage of written over spoken input. This has nothing to do with co-training but is interesting and important for L2 word learning (translation). Second is the pairs effect: the advantage of spoken + written input presented during unlabeled training compared with either one presented separately. The independence of the surface features of these inputs (specific speaker, specific font) was not a factor.<br />
<br />
To understand the pairs effect, we have to know whether it is restricted to, or merely larger for, [[unlabeled examples|unlabeled trials]]. Experiment 1 did not manipulate pairing in labeled trials. In the fall of 2006, we tested the pairing property under both labeled and unlabeled trials.<br />
<br />
To understand the correlation feature better, we are testing it in an in-vivo setup with more learning sessions.<br />
<br />
== Dependent variables ==<br />
<br />
[[Normal post-test]]: Accuracy of producing the English word in the reading and/or listening conditions.<br />
<br />
== Independent variables ==<br />
<br />
*Labeling<br />
*Pairing<br />
*Variation<br />
*Correlation<br />
<br />
== Hypothesis ==<br />
<br />
Pairing of visual font and auditory sound of Chinese characters should enhance learning under both labeled and unlabeled trials, but the benefit is most significant when the trials are unlabeled.<br />
[[Image:cotraining1.jpg]]<br />
<br />
== Findings ==<br />
<br />
*“Unlabeled paired” trials may aid learning. Learning of meanings was facilitated by the addition of unlabeled paired trials that did not provide meaning.<br />
**However, this unlabeled-trials effect was restricted to cross-modal pairs (spoken syllable and written character); it was absent when only one modality (spoken syllable) or the other (written character) was presented.<br />
**Implication: Cross-modal inputs in this situation can establish multiple representations (speech-writing pairs) from which meaning links are more readily retrieved.<br />
*Written form was learned better than spoken form: there was a large advantage for the presentation of written characters compared with their corresponding spoken syllables in learning a form-meaning pair.<br />
*A benefit of uncorrelated examples was not observed.<br />
**Correlated examples: a given font and a given speaker always co-occur (conditionally dependent)<br />
**Uncorrelated examples: a given font occurs with all speakers, and a given speaker occurs with all fonts (conditionally independent)<br />
**This is still being assessed by using multiple learning sessions.<br />
<br />
[[Image:cotraining2.jpg]]<br />
<br />
== Explanation ==<br />
<br />
The finding that learning of meanings was facilitated by the addition of unlabeled paired trials that did not provide meaning implies that predictions of the label are generated for the unlabeled trials, so that these trials serve as self-generated labeled trials and work as meaningful materials for learning. This effect is especially significant in the multiple-input situation (paired trials) because the establishment of multiple representations (speech-writing pairs) makes the “label prediction” more accurate.<br />
<br />
== Descendents ==<br />
<br />
None.<br />
<br />
== Further information ==</div>Liuying@pitt.eduhttps://learnlab.org/wiki/index.php?title=Co-training_of_Chinese_characters&diff=7173Co-training of Chinese characters2008-03-11T14:32:29Z<p>Liuying@pitt.edu: /* Hypothesis */</p>
<hr />
<div>----<br />
'''Summary Table'''<br />
*Node Title: Learning to read Chinese: [[Co-training]] in human (Study 1)<br />
*Researchers: Ying Liu, Charles Perfetti, Susan Dunlap, Gusheng Zi, Tom Mitchell<br />
*PIs: Ying Liu, Charles Perfetti, Tom Mitchell<br />
*Others who have contributed 160 hours or more:<br />
*Post-Docs: Gusheng Zi<br />
*Graduate Students: Derek Chan<br />
*Study Start Date Sep 1, 2005<br />
*Study End Date Dec 31, 2005<br />
*LearnLab Site and Courses , CMU Chinese Online<br />
*Number of Students: 20<br />
*Total Participant Hours for the study: 20<br />
*Data in the Data Shop: Yes<br />
----<br />
<br />
== Abstract ==<br />
The present study explored how native English speakers learn to speak and read Chinese in a cotraining environment. The experiment consisted of two parts. The first part was training, which was used to teach the input (Chinese fonts and sounds) to output (English translations) mapping of 16 Chinese characters. Training methods were manipulated in this part. A quarter of the subjects received only labeled training trials (English translation provided); the others received extra training trials with [[unlabeled examples|non-labeled trials]] (only the orthography and/or phonology, without the English translation). The non-labeled trials were further separated into three types: unpaired, correlated paired, and uncorrelated paired, with each type used for one quarter of the subjects.<br />
<br />
The second part was posttesting, in which students produced the English translation when they saw the Chinese fonts or heard the Chinese sounds one by one. The accuracy of translation was recorded. The results showed that [[unlabeled examples]] did help the learning, and uncorrelated paired examples did the best among all three types of unlabeled examples.<br />
<br />
== Glossary ==<br />
<br />
labeling; source pairing; source correlation.<br />
<br />
== Research question ==<br />
<br />
How do native English speakers learn to speak and read Chinese under various coordinative learning conditions?<br />
<br />
== Background ==<br />
<br />
In machine learning research, it has been found that multiple strategies and multiple modalities facilitate learning (Blum and Mitchell, 1998). However, the effectiveness of the properties of the “co-training” theory has not been tested in human learners yet. We carried out this study to directly test two important properties of this theory in human learners. There are two results from the finished experiment and one non-result of interest. Most dramatic is the advantage of written over spoken input. This has nothing to do with co-training but is interesting and important for L2 word learning (translation). Second is the pairs effect: the advantage of spoken + written input presented during unlabeled training compared with either one presented separately. The independence of the surface features of these inputs (specific speaker, specific font) was not a factor.<br />
<br />
To understand the pairs effect, we have to know whether it is restricted to, or merely larger for, [[unlabeled examples|unlabeled trials]]. Experiment 1 did not manipulate pairing in labeled trials. In the fall of 2006, we tested the pairing property under both labeled and unlabeled trials.<br />
<br />
To understand the correlation feature better, we are testing it in an in-vivo setup with more learning sessions.<br />
<br />
== Dependent variables ==<br />
<br />
[[Normal post-test]]: Accuracy of producing the English word in the reading and/or listening conditions.<br />
<br />
== Independent variables ==<br />
<br />
*Labeling<br />
*Pairing<br />
*Variation<br />
*Correlation<br />
<br />
== Hypothesis ==<br />
<br />
Pairing of visual font and auditory sound of Chinese characters should enhance learning under both labeled and unlabeled trials, but the benefit is most significant when the trials are unlabeled.<br />
[[Image:cotraining1.jpg]]<br />
<br />
== Findings ==<br />
<br />
*“Unlabeled paired” trials may aid learning. Learning of meanings was facilitated by the addition of unlabeled paired trials that did not provide meaning.<br />
**However, this unlabeled-trials effect was restricted to cross-modal pairs (spoken syllable and written character); it was absent when only one modality (spoken syllable) or the other (written character) was presented.<br />
**Implication: Cross-modal inputs in this situation can establish multiple representations (speech-writing pairs) from which meaning links are more readily retrieved.<br />
*Written form was learned better than spoken form: there was a large advantage for the presentation of written characters compared with their corresponding spoken syllables in learning a form-meaning pair.<br />
*A benefit of uncorrelated examples was not observed.<br />
**Correlated examples: a given font and a given speaker always co-occur (conditionally dependent)<br />
**Uncorrelated examples: a given font occurs with all speakers, and a given speaker occurs with all fonts (conditionally independent)<br />
**This is still being assessed by using multiple learning sessions.<br />
<br />
[[Image:cotraining2.jpg]]<br />
<br />
== Explanation ==<br />
<br />
The finding that learning of meanings was facilitated by the addition of unlabeled paired trials that did not provide meaning implies that predictions of the label are generated for the unlabeled trials, so that these trials serve as self-generated labeled trials and work as meaningful materials for learning. This effect is especially significant in the multiple-input situation (paired trials) because the establishment of multiple representations (speech-writing pairs) makes the “label prediction” more accurate.<br />
<br />
== Descendents ==<br />
<br />
None.<br />
<br />
== Further information ==<br />
<br />
www.pitt.edu/~liuying/pslc_plan.doc</div>Liuying@pitt.eduhttps://learnlab.org/wiki/index.php?title=File:Cotraining2.jpg&diff=7172File:Cotraining2.jpg2008-03-11T14:15:05Z<p>Liuying@pitt.edu: A summary of the result of cotraining study 1.</p>
<hr />
<div>A summary of the result of cotraining study 1.</div>Liuying@pitt.eduhttps://learnlab.org/wiki/index.php?title=Co-training_of_Chinese_characters&diff=7171Co-training of Chinese characters2008-03-11T14:13:37Z<p>Liuying@pitt.edu: /* Findings */</p>
<hr />
<div>----<br />
'''Summary Table'''<br />
*Node Title: Learning to read Chinese: [[Co-training]] in human (Study 1)<br />
*Researchers: Ying Liu, Charles Perfetti, Susan Dunlap, Gusheng Zi, Tom Mitchell<br />
*PIs: Ying Liu, Charles Perfetti, Tom Mitchell<br />
*Others who have contributed 160 hours or more:<br />
*Post-Docs: Gusheng Zi<br />
*Graduate Students: Derek Chan<br />
*Study Start Date Sep 1, 2005<br />
*Study End Date Dec 31, 2005<br />
*LearnLab Site and Courses , CMU Chinese Online<br />
*Number of Students: 20<br />
*Total Participant Hours for the study: 20<br />
*Data in the Data Shop: Yes<br />
----<br />
<br />
== Abstract ==<br />
The present study explored how native English speakers learn to speak and read Chinese in a cotraining environment. The experiment consisted of two parts. The first part was training, which was used to teach the input (Chinese fonts and sounds) to output (English translations) mapping of 16 Chinese characters. Training methods were manipulated in this part. A quarter of the subjects received only labeled training trials (English translation provided); the others received extra training trials with [[unlabeled examples|non-labeled trials]] (only the orthography and/or phonology, without the English translation). The non-labeled trials were further separated into three types: unpaired, correlated paired, and uncorrelated paired, with each type used for one quarter of the subjects.<br />
<br />
The second part was posttesting, in which students produced the English translation when they saw the Chinese fonts or heard the Chinese sounds one by one. The accuracy of translation was recorded. The results showed that [[unlabeled examples]] did help the learning, and uncorrelated paired examples did the best among all three types of unlabeled examples.<br />
<br />
== Glossary ==<br />
<br />
labeling; source pairing; source correlation.<br />
<br />
== Research question ==<br />
<br />
How do native English speakers learn to speak and read Chinese under various coordinative learning conditions?<br />
<br />
== Background ==<br />
<br />
In machine learning research, it has been found that multiple strategies and multiple modalities facilitate learning (Blum and Mitchell, 1998). However, the effectiveness of the properties of the “co-training” theory has not been tested in human learners yet. We carried out this study to directly test two important properties of this theory in human learners. There are two results from the finished experiment and one non-result of interest. Most dramatic is the advantage of written over spoken input. This has nothing to do with co-training but is interesting and important for L2 word learning (translation). Second is the pairs effect: the advantage of spoken + written input presented during unlabeled training compared with either one presented separately. The independence of the surface features of these inputs (specific speaker, specific font) was not a factor.<br />
<br />
To understand the pairs effect, we have to know whether it is restricted to, or merely larger for, [[unlabeled examples|unlabeled trials]]. Experiment 1 did not manipulate pairing in labeled trials. In the fall of 2006, we tested the pairing property under both labeled and unlabeled trials.<br />
<br />
To understand the correlation feature better, we are testing it in an in-vivo setup with more learning sessions.<br />
<br />
== Dependent variables ==<br />
<br />
[[Normal post-test]]: Accuracy of producing the English word in the reading and/or listening conditions.<br />
<br />
== Independent variables ==<br />
<br />
*Labeling<br />
*Pairing<br />
*Variation<br />
*Correlation<br />
<br />
== Hypothesis ==<br />
<br />
Pairing of visual font and auditory sound of Chinese characters should enhance learning under both labeled and unlabeled trials, but the benefit is most significant when the trials are unlabeled.<br />
[[Image:cotraining1.jpg]]<br />
<br />
== Findings ==<br />
<br />
*“Unlabeled paired” trials may aid learning. Learning of meanings was facilitated by the addition of unlabeled paired trials that did not provide meaning.<br />
**However, this unlabeled-trials effect was restricted to cross-modal pairs (spoken syllable and written character); it was absent when only one modality (spoken syllable) or the other (written character) was presented.<br />
**Implication: Cross-modal inputs in this situation can establish multiple representations (speech-writing pairs) from which meaning links are more readily retrieved.<br />
*Written form was learned better than spoken form: there was a large advantage for the presentation of written characters compared with their corresponding spoken syllables in learning a form-meaning pair.<br />
*A benefit of uncorrelated examples was not observed.<br />
**Correlated examples: a given font and a given speaker always co-occur (conditionally dependent)<br />
**Uncorrelated examples: a given font occurs with all speakers, and a given speaker occurs with all fonts (conditionally independent)<br />
**This is still being assessed by using multiple learning sessions.<br />
<br />
[[Image:cotraining2.jpg]]<br />
<br />
== Explanation ==<br />
<br />
The finding that learning of meanings was facilitated by the addition of unlabeled paired trials that did not provide meaning implies that predictions of the label are generated for the unlabeled trials, so that these trials serve as self-generated labeled trials and work as meaningful materials for learning. This effect is especially significant in the multiple-input situation (paired trials) because the establishment of multiple representations (speech-writing pairs) makes the “label prediction” more accurate.<br />
<br />
== Descendents ==<br />
<br />
None.<br />
<br />
== Further information ==<br />
<br />
www.pitt.edu/~liuying/pslc_plan.doc</div>Liuying@pitt.eduhttps://learnlab.org/wiki/index.php?title=Co-training_of_Chinese_characters&diff=7170Co-training of Chinese characters2008-03-11T14:09:57Z<p>Liuying@pitt.edu: </p>
<hr />
<div>----<br />
'''Summary Table'''<br />
*Node Title: Learning to read Chinese: [[Co-training]] in human (Study 1)<br />
*Researchers: Ying Liu, Charles Perfetti, Susan Dunlap, Gusheng Zi, Tom Mitchell<br />
*PIs: Ying Liu, Charles Perfetti, Tom Mitchell<br />
*Others who have contributed 160 hours or more:<br />
*Post-Docs: Gusheng Zi<br />
*Graduate Students: Derek Chan<br />
*Study Start Date Sep 1, 2005<br />
*Study End Date Dec 31, 2005<br />
*LearnLab Site and Courses , CMU Chinese Online<br />
*Number of Students: 20<br />
*Total Participant Hours for the study: 20<br />
*Data in the Data Shop: Yes<br />
----<br />
<br />
== Abstract ==<br />
The present study explored how native English speakers learn to speak and read Chinese in a cotraining environment. The experiment consisted of two parts. The first part was training, which was used to teach the input (Chinese fonts and sounds) to output (English translations) mapping of 16 Chinese characters. Training methods were manipulated in this part. A quarter of the subjects received only labeled training trials (English translation provided); the others received extra training trials with [[unlabeled examples|non-labeled trials]] (only the orthography and/or phonology, without the English translation). The non-labeled trials were further separated into three types: unpaired, correlated paired, and uncorrelated paired, with each type used for one quarter of the subjects.<br />
<br />
The second part was posttesting, in which students produced the English translation when they saw the Chinese fonts or heard the Chinese sounds one by one. The accuracy of translation was recorded. The results showed that [[unlabeled examples]] did help the learning, and uncorrelated paired examples did the best among all three types of unlabeled examples.<br />
<br />
== Glossary ==<br />
<br />
labeling; source pairing; source correlation.<br />
<br />
== Research question ==<br />
<br />
How do native English speakers learn to speak and read Chinese under various coordinative learning conditions?<br />
<br />
== Background ==<br />
<br />
In machine learning research, it has been found that multiple strategies and multiple modalities facilitate learning (Blum and Mitchell, 1998). However, the effectiveness of the properties of the “co-training” theory has not been tested in human learners yet. We carried out this study to directly test two important properties of this theory in human learners. There are two results from the finished experiment and one non-result of interest. Most dramatic is the advantage of written over spoken input. This has nothing to do with co-training but is interesting and important for L2 word learning (translation). Second is the pairs effect: the advantage of spoken + written input presented during unlabeled training compared with either one presented separately. The independence of the surface features of these inputs (specific speaker, specific font) was not a factor.<br />
<br />
To understand the pairs effect, we have to know whether it is restricted to, or merely larger for, [[unlabeled examples|unlabeled trials]]. Experiment 1 did not manipulate pairing in labeled trials. In the fall of 2006, we tested the pairing property under both labeled and unlabeled trials.<br />
<br />
To understand the correlation feature better, we are testing it in an in-vivo setup with more learning sessions.<br />
<br />
== Dependent variables ==<br />
<br />
[[Normal post-test]]: Accuracy of producing the English word in the reading and/or listening conditions.<br />
<br />
== Independent variables ==<br />
<br />
*Labeling<br />
*Pairing<br />
*Variation<br />
*Correlation<br />
<br />
== Hypothesis ==<br />
<br />
Pairing of the visual font and the auditory sound of Chinese characters should enhance learning in both labeled and unlabeled trials, but the benefit should be largest when the trials are unlabeled.<br />
[[Image:cotraining1.jpg]]<br />
<br />
== Findings ==<br />
<br />
*“Unlabeled paired” trials may aid learning. Learning meanings was facilitated by the addition of unlabeled paired trials that did not provide meaning.<br />
**However, this unlabeled-trials effect was restricted to cross-modal pairs (spoken syllable and written character); it was absent when only one modality (spoken syllable) or the other (written character) was presented.<br />
**Implication: cross-modal inputs in this situation can establish multiple representations (speech-writing pairs) from which meaning links are more readily retrieved.<br />
*The written form was learned better than the spoken form: there was a large advantage for the presentation of written characters compared with their corresponding spoken syllables in learning a form-meaning pair.<br />
*A benefit of uncorrelated examples was not observed (see the sketch after this list for the two pairing schemes).<br />
**Correlated examples: a given font and a given speaker always co-occur (conditionally dependent).<br />
**Uncorrelated examples: a given font occurs with all speakers, and a given speaker occurs with all fonts (conditionally independent).<br />
**This is still being assessed using multiple learning sessions.<br />
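<br />
To make the pairing conditions concrete, here is a small hypothetical sketch of how correlated versus uncorrelated font–speaker pairings could be constructed; the font and speaker names are invented for illustration.<br />
<pre>
# Hypothetical illustration of the two pairing schemes for unlabeled trials.
import itertools

fonts = ["font_A", "font_B"]            # invented font identities
speakers = ["speaker_1", "speaker_2"]   # invented speaker identities

# Correlated (conditionally dependent): a given font always co-occurs
# with the same speaker.
correlated_pairs = list(zip(fonts, speakers))
# -> [('font_A', 'speaker_1'), ('font_B', 'speaker_2')]

# Uncorrelated (conditionally independent): every font is heard from
# every speaker.
uncorrelated_pairs = list(itertools.product(fonts, speakers))
# -> [('font_A', 'speaker_1'), ('font_A', 'speaker_2'),
#     ('font_B', 'speaker_1'), ('font_B', 'speaker_2')]

print(correlated_pairs)
print(uncorrelated_pairs)
</pre>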
<br />
[[Image:cotraining2.jpg]]<br />
<br />
== Explanation ==<br />
<br />
The finding that learning meanings was facilitated by the addition of unlabeled paired trials that did not provide meaning implies that predictions of the label are generated for unlabeled trials, so these trials serve as self-generated labeled trials and work as meaningful materials for learning. This effect is especially significant in a multiple-input situation (paired trials) because the establishment of multiple representations (speech-writing pairs) makes the “label prediction” more accurate.<br />
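<br />
The “more accurate label prediction” point can be illustrated with a back-of-the-envelope calculation (hypothetical numbers, not study data): if each modality alone predicts the correct meaning with moderate accuracy and the two modalities err independently, then a self-generated label on which both views agree is right far more often than either view alone.<br />
<pre>
# Hypothetical illustration of why cross-modal agreement sharpens label prediction.
k = 16    # number of characters in the training set (from the design)
p = 0.70  # assumed accuracy of a single-view label prediction (invented value)

both_correct = p * p
# If errors are independent and spread evenly over the k-1 wrong meanings,
# the two views rarely agree on the same wrong answer.
agree_but_wrong = (1 - p) * (1 - p) / (k - 1)

accuracy_given_agreement = both_correct / (both_correct + agree_but_wrong)
print(f"single view: {p:.2f}; when both views agree: {accuracy_given_agreement:.3f}")
# -> single view: 0.70; when both views agree: 0.988 (under these assumptions)
</pre>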
<br />
== Descendents ==<br />
<br />
None.<br />
<br />
== Further information ==<br />
<br />
www.pitt.edu/~liuying/pslc_plan.doc</div>Liuying@pitt.eduhttps://learnlab.org/wiki/index.php?title=File:Cotraining1.jpg&diff=7168File:Cotraining1.jpg2008-03-11T13:49:57Z<p>Liuying@pitt.edu: This figure illustrates the cotraining framework in learning Chinese characters.</p>
<hr />
<div>This figure illustrates the cotraining framework in learning Chinese characters.</div>Liuying@pitt.eduhttps://learnlab.org/wiki/index.php?title=Co-training_of_Chinese_characters&diff=7167Co-training of Chinese characters2008-03-11T13:48:58Z<p>Liuying@pitt.edu: </p>
<hr />
<div>----<br />
'''Summary Table'''<br />
*Node Title: Learning to read Chinese: [[Co-training]] in humans<br />
*Researchers: Ying Liu, Charles Perfetti, Susan Dunlap, Gusheng Zi, Tom Mitchell<br />
*PIs: Ying Liu, Charles Perfetti, Tom Mitchell<br />
*Others who have contributed 160 hours or more:<br />
*Post-Docs: Gusheng Zi<br />
*Graduate Students: Derek Chan<br />
*Study Start Date Sep 1, 2005<br />
*Study End Date Dec 31, 2006<br />
*LearnLab Site and Courses: CMU Chinese Online<br />
*Number of Students: 20<br />
*Total Participant Hours for the study: 20<br />
*Data in the Data Shop: Yes<br />
----<br />
<br />
== Abstract ==<br />
The present study explored how native English speakers learn to speak and read Chinese in a co-training environment. The experiment consisted of two parts. The first part was training, which taught the mapping from input (Chinese fonts and sounds) to output (English translations) for 16 Chinese characters. Training methods were manipulated in this part. A quarter of the subjects received only labeled training trials (English translation provided); the others received extra training trials with [[unlabeled examples|non-labeled trials]] (only the orthography and/or phonology, without the English translation). The non-labeled trials were further separated into three types: unpaired, correlated paired, and uncorrelated paired, with each type used for one quarter of the subjects.<br />
<br />
The second part was posttesting, in which students produced the English translation when they saw the Chinese fonts or heard the Chinese sounds one by one. The accuracy of translation was recorded. The results showed that [[unlabeled examples]] did help the learning, and uncorrelated paired examples did the best among the three types of unlabeled examples.<br />
<br />
== Glossary ==<br />
<br />
labeling; source pairing; source correlation.<br />
<br />
== Research question ==<br />
<br />
How do native English speakers learn to speak and read Chinese under various coordinative learning conditions?<br />
<br />
== Background ==<br />
<br />
In machine learning research, it has been found that multiple strategies and multiple modalities facilitate learning (Blum and Mitchell, 1998). However, the properties of the “co-training” theory have not yet been tested in human learners. We carried out this study to directly test two important properties of this theory in human learners. There are two results from the finished experiment and one non-result of interest. Most dramatic is the advantage of written over spoken input; this has nothing to do with co-training, but it is interesting and important for L2 word learning (translation). Second is the pairs effect: the advantage of spoken + written input presented during unlabeled training compared with either one presented separately. The independence of the surface features of these inputs (specific speaker, specific font) was not a factor.<br />
<br />
To understand the pairs effect, we have to know whether it is restricted to, or merely larger for, [[unlabeled examples|unlabeled trials]]. Experiment 1 did not manipulate pairing in labeled trials. In the fall of 2006, we tested the pairing property under both labeled and unlabeled trials.<br />
<br />
To understand the correlation feature better, we are testing it in an in-vivo setup with more learning sessions.<br />
<br />
== Dependent variables ==<br />
<br />
[[Normal post-test]]: Accuracy of producing the English word in the reading and/or listening situation.<br />
<br />
== Independent variables ==<br />
<br />
*Labeling<br />
*Pairing<br />
*Variation<br />
*Correlation<br />
<br />
== Hypothesis ==<br />
<br />
Pairing of the visual font and the auditory sound of Chinese characters should enhance learning in both labeled and unlabeled trials, but the benefit should be largest when the trials are unlabeled.<br />
[[Image:cotraining1.jpg]]<br />
<br />
== Findings ==<br />
<br />
There are two results from the first experiment and one non-result of interest. Most dramatic is the advantage of written over spoken input; this has nothing to do with co-training, but it is interesting and important for L2 word learning (translation). Second is the pairs effect: the advantage of spoken + written input presented during unlabeled training compared with either one presented separately. The independence of the surface features of these inputs (specific speaker, specific font) was not a factor.<br />
Experiment 2 is under analysis, and Experiment 3 is collecting data.<br />
<br />
== Explanation ==<br />
<br />
The finding that learning meanings was facilitated by the addition of unlabeled paired trials that did not provide meaning implies that predictions of the label are generated for unlabeled trials, so these trials serve as self-generated labeled trials and work as meaningful materials for learning. This effect is especially significant in a multiple-input situation (paired trials) because the establishment of multiple representations (speech-writing pairs) makes the “label prediction” more accurate.<br />
<br />
== Descendents ==<br />
<br />
None.<br />
<br />
== Further information ==<br />
<br />
www.pitt.edu/~liuying/pslc_plan.doc</div>Liuying@pitt.edu