A word-experience model of Chinese character learning

ABSTRACT

Computational models of learning can advance the science of learning by making explicit assumptions about the learning process, thereby providing model-based hypotheses that can be tested against data. Our intention is to demonstrate how a model that we have developed for English word reading (Reichle & Perfetti, 2003) can be applied to learning to read characters in Chinese. The basic model assumptions are: (1) that the ability to read words is acquired on a word-by-word basis, and (2) that generalized (robust) learning of a word results from many different encoding (reading) contexts whose variability becomes less important with repeated encounters. On this account, a robust, context-general word representation emerges through the extraction of a stable form from its many experienced variations. The application of this model to Chinese character learning is especially compelling because, unlike alphabetic learning, which affords generalization across words, Chinese learning proceeds character by character. The proposed modeling project will thus result in a computational framework for examining the consequences of contextual variability on the learning and retention of lexical information in a specific academic domain, the acquisition of a second language. The project will also provide a more general analytical framework for examining the factors that contribute to robust learning across many domains.
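
The core idea behind assumption (2) can be illustrated with a minimal toy sketch (in Python, not the Reichle & Perfetti model itself): each encounter with a word contributes a trace that mixes a stable form with context-specific variation, and accumulating traces across encounters lets the stable, context-general form dominate. The vector dimensionality, noise level, and averaging rule here are illustrative assumptions, not parameters of the actual model.

```python
import numpy as np

# Toy illustration only: a word's stable form is a fixed vector, and each
# reading encounter embeds it in a different random context. Averaging the
# traces of many encounters lets the stable form emerge while the
# context-specific variability washes out.

rng = np.random.default_rng(0)

DIM = 50                              # arbitrary representation size (assumption)
true_form = rng.normal(size=DIM)      # the stable, context-general form


def encounter(form, context_sd):
    """One reading experience: the stable form plus context-specific noise."""
    return form + rng.normal(scale=context_sd, size=DIM)


def learned_representation(form, n_encounters, context_sd):
    """Accumulate (average) the traces laid down by all encounters."""
    traces = [encounter(form, context_sd) for _ in range(n_encounters)]
    return np.mean(traces, axis=0)


for n in (1, 5, 25, 100):
    rep = learned_representation(true_form, n, context_sd=2.0)
    # Cosine similarity between the learned representation and the stable form
    sim = rep @ true_form / (np.linalg.norm(rep) * np.linalg.norm(true_form))
    print(f"{n:3d} encounters: similarity to stable form = {sim:.3f}")
```

Running the sketch shows the similarity between the accumulated representation and the stable form rising toward 1.0 as encounters increase, which is the sense in which contextual variability "becomes less important" with repeated experience.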