Composition Effect Kao Roll - old, please keep

The Composition Effect - What is the Source of Difficulty in Problems which Require Application of Several Skills?

Ido Roll, Yvonne Kao, Kenneth E. Koedinger

Abstract

Composite problems, i.e., problems that require the application of more than one skill, have been shown to be harder than a collection of single-step problems requiring the same set of skills. A common explanation is that the composition itself imposes an additional level of difficulty. An alternative explanation, however, is that the composition makes the application of the individual skills themselves harder: poor feature validity and shallow domain rules make it harder for students to apply the individual skills correctly in the cluttered environment of composite problems, regardless of the need to apply additional skills. Our study investigates these issues in two ways: (1) using a DFA that evaluates performance on composite problems and single-step problems with the same data, and (2) evaluating the effect of instruction that targets a common misconception in single-step problems on performance on composite problems.

Glossary

- Composite problems: Problems which require the application of several skills, such as solving 3x+6=0 for x.

- Single-step problems: Problems which require the application of a single skill, such as y+6=0 or 3x=-6.

- DFA (Difficulty Factor Analysis): A test that includes pairs of items that vary along only one dimension. It makes it possible to evaluate the difficulty contributed by each of the individual dimensions along which the problems differ.

- The Composition Effect: The finding that composite problems are harder than a set of single-step problems that use the same skills.


Research question

What is the main source of difficulty in composite problems?


Background and Significance

Significance: This study can shed light on the source of difficulty in composite problems, and can thus inform the design of relevant instruction and remediation.

Independent Variables

Instruction in the form of a solved example, targeting a common misconception: identifying the base and height in a cluttered environment.

Dependent variables

Three tests are used in the study:

- Pre-test: given before all instruction
- Mid-test: given after students have learned about single-step problems and before composite problems
- Post-test: given after students have learned and practiced all material

The tests include the following items. Some of these are transfer items that evaluate robust learning, since they require an adaptive application of the knowledge learned and practiced in class. A sketch of the item design follows the list.

  • Simple diagram:
    1. no distractors, canonical orientation
    2. distractors, canonical orientation
    3. no distractors, tilted orientation
    4. distractors, tilted orientation
  • Complex diagram:
    1. Given complex diagram, ask for skill A
    2. Given complex diagram, ask for skill B
    3. Given measures A and B, ask for skill C (which requires A and B)
    4. Given complex diagram, ask for C (which requires A and B)
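The sketch below, in Python, makes the item design concrete by enumerating the simple-diagram items as a 2x2 crossing of distractor presence and orientation, together with the four complex-diagram item types. The labels and data structure are illustrative assumptions, not the study's actual materials.

  from itertools import product

  # Simple-diagram items: a 2x2 crossing of distractor presence and orientation.
  # The labels are illustrative placeholders, not the study's item wording.
  distractor_levels = ["no distractors", "distractors"]
  orientation_levels = ["canonical orientation", "tilted orientation"]

  simple_items = [
      {"diagram": "simple", "distractors": d, "orientation": o}
      for d, o in product(distractor_levels, orientation_levels)
  ]

  # Complex-diagram items: what is given and which skill is asked for.
  complex_items = [
      {"diagram": "complex", "given": "diagram",          "asked": "skill A"},
      {"diagram": "complex", "given": "diagram",          "asked": "skill B"},
      {"diagram": "complex", "given": "measures A and B", "asked": "skill C"},
      {"diagram": "complex", "given": "diagram",          "asked": "skill C"},  # full composite item
  ]

  for item in simple_items + complex_items:
      print(item)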

Hypothesis

  1. The difficulty in composite problems originates in poor feature validity of the single skills.
    • An operationalized version of this hypothesis is that performance on items of type "Find measure C based on diagram" will be equivalent to the product of the success rates on the items "Find measure C based on items A and B", "Find measure A based on diagram", and "Find measure B based on diagram" (see the sketch following this list).
  2. Tilted orientation and distractors still impose difficulty even once students have mastered the skills.
  3. Direct instruction during the test that targets these misconceptions, in the form of a solved example, can improve performance.
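To make the operationalized version of hypothesis 1 concrete, here is a minimal Python sketch of the multiplicative prediction it implies: the predicted success rate on the full composite item is the product of the three single-step success rates. All numbers are hypothetical placeholders; no study data are available yet (see Findings below).

  # Minimal sketch of the multiplicative prediction behind hypothesis 1.
  # All success rates are hypothetical placeholders, not data from the study.

  p_A_from_diagram = 0.90  # "Find measure A based on diagram"
  p_B_from_diagram = 0.80  # "Find measure B based on diagram"
  p_C_given_A_B = 0.70     # "Find measure C based on items A and B"

  # If the difficulty of the composite item comes only from applying each
  # individual skill, success on "Find measure C based on diagram" should
  # roughly equal the product of the single-step success rates:
  predicted_composite = p_A_from_diagram * p_B_from_diagram * p_C_given_A_B
  print(f"Predicted composite success rate: {predicted_composite:.3f}")  # 0.504

  # An observed rate well below this prediction would suggest that the
  # composition itself adds difficulty beyond the individual skills.
  observed_composite = 0.40  # hypothetical observed value, for illustration only
  print(f"Composition gap: {predicted_composite - observed_composite:.3f}")  # 0.104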

Findings

None yet.

Explanation

Descendants

Annotated bibliography
