Talk:Ringenberg Examples-as-Help


Interactive Communication Cluster: Wiki Page Reviews and Discussion

Date: March 5, 2007
Reviewer: Kirsten Butcher


Overall: The study description is easy to read and the independent variables are clear. The results listed in the abstract sound very interesting and make me excited to read the rest of the study.

Abstract: The abstract is succinct and clear, but there are a couple of things that may help the reader understand the manipulation and its importance.

First, it is easy to trip up on the term "completely justified example" (I wonder why you chose this term instead of the "annotated, worked examples" that appears in your paper title?). So, it was very helpful that you included a glossary entry for the term. But I still have some confusion. To what does "completely" refer? Is it the number of steps shown, the inclusion of the explanation, or both? The explanations/rationale for the steps don't seem very complete (especially compared to the expanded text that one often finds in standard hint sequences, although your hint sequence screen shot doesn't seem to include any conceptual content, so I'm not sure about the nature of Andes hint sequences). I assume the hints are "completely justified" in the sense that they give you all the steps for the problem (and add a bit of context by supplying a description). But it may help drive the point home to add a contrasting example to the glossary entry -- that is, to describe in the glossary entry how the completely justified hints differ from standard hint sequences. Also, adding a very brief description of each condition to the abstract text would help the reader understand the relevant comparisons more easily (as is, the reader must first follow glossary links to develop an understanding of each condition separately, then must infer the relevant contrasts).

Second, the abstract doesn't explain the theoretical rationale for why these conditions were chosen/tested, nor why differences might be expected. It would strengthen the impact of the study to include a brief description of these theoretical issues in the abstract, with a connection to one or more of the Interactive Communication Cluster's main research questions. As is, it is difficult to see what aspects of the study concern "Interactive Communication."

Background and Significance: Don't forget to add the citations indicated. More importantly, some conceptual information could make this section clearer. First, "help abuse" in this context seems to refer to the progression through a full hint sequence and, as you indicate, could be problematic or it could be a useful strategy. Thus, it seems that the critical question is whether students are using the full example sequence as a shallow learning strategy or as a worked example. I assume that this distinction is what you are testing with the current manipulation. However, it would be useful to make this approach explicit. I would start by describing the problem without the loaded terminology, then make the distinction between the two possibilities (e.g., ...students often view entire hint sequences before returning to a problem step. This approach could constitute "help abuse," or it could indicate that students are using hints as worked examples. In this study...).

A related question is whether you will split the standard hint condition by behavioral data (those who show potential "hint abuse" and those who don't). It seems that the relevant comparison is between the hint abusers (only) and the worked examples group.

Independent Variables: Looks good, but just one request for clarification here. Is the example given in the "completely justified hint" the exact problem being completed? It's not a different but isomorphic example, correct?

Dependent Variables & Results: Minor point -- to be consistent among study pages, dependent variables should be split from results (and results should be renamed "Findings").

I have a couple of questions about your presentation of results and their implications. First, why is a result for the number of training problems completed listed under "Transfer task, deep structure assessment"? It seems that you want to make an efficiency argument related to the lack of differences in the transfer task, but it seems odd to do so here when you have a separate section for "Homework," with subsections for the number of problems completed and participant time on task. Perhaps efficiency is a topic for an explanation section? Second, it would be useful to describe these transfer tasks with an example. Third, perhaps you should note explicitly that the time-on-task difference is attributable solely (as far as I can tell) to the number of problems completed, not to effort on each problem or efficiency/lack of errors within a problem.

I'd love to see some graphs and stats here, to get an idea for what is truly statistically significant and to understand the magnitude of difference that is being described.

Explanation: This section needs to be added. I think it is going to be very important to tie your study's research questions and findings to key research questions addressed in the Interactive Communication Cluster. The links are not totally clear to me, so your framing will be important.

I assume you might want to argue that your "efficiency" results suggest that students using the worked examples are doing more "self-explanation" or using the hints in more generative ways? But I wonder what other types of processes might explain your results. What about an attention argument -- that the worked examples force students to attend to the most relevant information in the hints? Or a cognitive load explanation (of which I'm generally not a fan, but I can see it being suggested by a critic), where the worked examples simply decrease the extraneous cognitive load created by processing lengthier hints (assuming that Andes' standard hint sequences include more text).

Overall, nice study and congrats on the paper award! Let me know if you have questions about my comments.