Talk:Does learning from worked-out examples improve tutored problem solving?

Revision as of 22:02, 20 November 2006 by Michael-Ringenberg

In the Explanation section, I think that you could be a little clearer about what you expect the participants to learn. Are they increasing their deep feature perception? Are they refining their knowledge components?

I think it would be a very good idea to fill in the glossary items, particularly "Self-explanation." I am not sure that your definition would completely match Chi's, since she has defined self-explanations in the context of example study as either derivational or procedural (where does this step come from, or why was this step done?). The self-explanations here seem to be concerned more with the derivational type, which leads me to believe that this study is concerned more with "deep feature perception" as the important underlying feature from the PSLC framework.

I would also argue that in the event space tree, if a student enters a correct step but uses shallow strategies, as in 1.2 and 4.2, then they did learn something; it is just that they did not learn or reinforce the correct knowledge component.

I think it would also be interesting to add an analysis of the logfiles to look at the students' behavior. If I understand your theory correctly, the problem-solving-only students should need more assistance from the tutor than the example studiers, particularly at the beginning.
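To make that prediction concrete, here is a minimal sketch of how such a logfile analysis might look, assuming a hypothetical CSV export of the tutor logs with columns student, condition, problem_index, and hint_count (the actual log format will differ, so this is only an illustration):

 # Minimal sketch of the suggested logfile analysis.
 # Assumes a hypothetical CSV export with columns:
 #   student, condition ("example" or "problem_only"), problem_index, hint_count.
 import csv
 from collections import defaultdict
 
 def mean_hints_by_condition(path, early_cutoff=3):
     """Average hint requests per log row, split by condition and by
     early (problem_index < early_cutoff) versus later problems."""
     totals = defaultdict(lambda: [0, 0])  # (condition, phase) -> [hint sum, row count]
     with open(path, newline="") as f:
         for row in csv.DictReader(f):
             phase = "early" if int(row["problem_index"]) < early_cutoff else "late"
             key = (row["condition"], phase)
             totals[key][0] += int(row["hint_count"])
             totals[key][1] += 1
     return {key: hints / rows for key, (hints, rows) in totals.items() if rows}
 
 if __name__ == "__main__":
     for key, mean in sorted(mean_hints_by_condition("tutor_log.csv").items()):
         print(key, round(mean, 2))

If the theory holds, the problem-only condition should show a noticeably higher early-problem mean than the example-study condition, with the gap shrinking on later problems.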

I also think that further specifying the dependent variables, and how you expect each group to perform on each measure, would help clarify the study. For example, if the conceptual knowledge measure is simply solving similar items on a test a day later, then you should say so and state whether you expect the two groups to perform similarly.