<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://learnlab.org/mediawiki-1.44.2/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Woolerystixmaker</id>
	<title>Theory Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://learnlab.org/mediawiki-1.44.2/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Woolerystixmaker"/>
	<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Special:Contributions/Woolerystixmaker"/>
	<updated>2026-05-01T00:24:49Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.44.2</generator>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Assistance_dilemma&amp;diff=12130</id>
		<title>Assistance dilemma</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Assistance_dilemma&amp;diff=12130"/>
		<updated>2011-08-26T09:25:03Z</updated>

		<summary type="html">&lt;p&gt;Woolerystixmaker: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Glossary]]&lt;br /&gt;
[[Category:PSLC General]]&lt;br /&gt;
&lt;br /&gt;
A fundamental problem of instructional engineering is: when should instruction provide students with [[assistance]] and when should it withhold assistance?  We call this problem the &#039;&#039;assistance dilemma&#039;&#039; (Koedinger &amp;amp; Aleven, in press).  The dilemma emerges because there are complementary benefits and costs of providing higher vs. lower levels of instructional assistance. Lower assistance challenges students to generate and construct knowledge on their own, but may leave them floundering, frustrated and wasting time.  Higher assistance can provide students with information they will not generate on their own, but may reduce engagement or prevent formation of lasting memories. &lt;br /&gt;
&lt;br /&gt;
In fact, the assistance dilemma is a central battleground of the education wars, with one side advocating more direct instruction and drill of basic skills, that is, higher assistance, and the other side advocating more student initiative, construction, discovery, and learning by doing, that is, lower assistance. Our view is that the assistance dilemma will not be resolved by determining which side is right, but by specifying dimensions of assistance and, ultimately, identifying and fitting parameters along these dimensions to determine the optimal level of assistance given the instructional goal and the students&#039; knowledge state relative to that goal.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Koedinger, K. R., &amp;amp; Aleven V. (in press). Exploring the assistance dilemma in experiments with Cognitive Tutors. &#039;&#039;Educational Psychology Review&#039;&#039;.&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Woolerystixmaker</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Baker_-_Building_Generalizable_Fine-grained_Detectors&amp;diff=12129</id>
		<title>Baker - Building Generalizable Fine-grained Detectors</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Baker_-_Building_Generalizable_Fine-grained_Detectors&amp;diff=12129"/>
		<updated>2011-08-26T09:24:51Z</updated>

		<summary type="html">&lt;p&gt;Woolerystixmaker: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Building Generalizable Fine-grained Detectors ==&lt;br /&gt;
&lt;br /&gt;
=== Summary Table ===&lt;br /&gt;
====Study 1====&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;5&amp;quot; style=&amp;quot;text-align: left;&amp;quot;&lt;br /&gt;
| &#039;&#039;&#039;PIs&#039;&#039;&#039; || Ryan Baker, Vincent Aleven&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Other Contributors&#039;&#039;&#039; || Sidney D&#039;Mello (Consultant, University of Memphis), Ma. Mercedes T. Rodrigo (Consultant, Ateneo de Manila University)&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study Start Date&#039;&#039;&#039; || February, 2010&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Study End Date&#039;&#039;&#039; || February, 2011&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Site&#039;&#039;&#039; || TBD&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;LearnLab Course&#039;&#039;&#039; || Algebra, Geometry, Chemistry, MathTutor, ScienceAssistments&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Number of Students&#039;&#039;&#039; || 78 so far; total TBD&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Total Participant Hours&#039;&#039;&#039; || 444 so far; total TBD&lt;br /&gt;
|-&lt;br /&gt;
| &#039;&#039;&#039;Data available in DataShop&#039;&#039;&#039; || [https://pslcdatashop.web.cmu.edu/DatasetInfo?datasetId=431 Dataset: CMU VlabHomeworks F2010]&amp;lt;br&amp;gt;&lt;br /&gt;
[https://pslcdatashop.web.cmu.edu/DatasetInfo?datasetId=448 Dataset: Affect Detectors and Questionnaires Greenville 2010-11]&amp;lt;br&amp;gt;&lt;br /&gt;
* &#039;&#039;&#039;Pre/Post Test Score Data:&#039;&#039;&#039; TBD&lt;br /&gt;
* &#039;&#039;&#039;Paper or Online Tests:&#039;&#039;&#039; TBD&lt;br /&gt;
* &#039;&#039;&#039;Scanned Paper Tests:&#039;&#039;&#039; TBD&lt;br /&gt;
* &#039;&#039;&#039;Blank Tests:&#039;&#039;&#039; TBD&lt;br /&gt;
* &#039;&#039;&#039;Answer Keys: &#039;&#039;&#039; TBD&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
=== Abstract ===&lt;br /&gt;
This project, joint between M&amp;amp;M and CMDM, will create a set of fine-grained detectors of affect and M&amp;amp;M behaviors. These detectors will be usable by future projects in these two thrusts to study the impact of learning interventions on these dimensions of students’ learning experiences, and to study the inter-relationships between these constructs and other key PSLC constructs (such as measures of robust learning, and motivational questionnaire data). It will be possible to apply these detectors retrospectively to existing PSLC data in [[DataShop]], in order to re-interpret prior work in the light of relevant evidence on students’ affect and M&amp;amp;M behaviors. &lt;br /&gt;
&lt;br /&gt;
=== Background &amp;amp; Significance ===&lt;br /&gt;
&lt;br /&gt;
=== Glossary ===&lt;br /&gt;
&lt;br /&gt;
[[Metacognition and Motivation]]&lt;br /&gt;
&lt;br /&gt;
[[Computational Modeling and Data Mining]]&lt;br /&gt;
&lt;br /&gt;
[[Gaming the system]]&lt;br /&gt;
&lt;br /&gt;
[[Off-Task Behavior]]&lt;br /&gt;
&lt;br /&gt;
[[Affect]]&lt;br /&gt;
&lt;br /&gt;
[[Frustration]]&lt;br /&gt;
&lt;br /&gt;
[[Boredom]]&lt;br /&gt;
&lt;br /&gt;
[[Flow]]&lt;br /&gt;
&lt;br /&gt;
[[Engaged Concentration]]&lt;br /&gt;
&lt;br /&gt;
=== Hypotheses ===&lt;br /&gt;
&lt;br /&gt;
H1: We hypothesize that it will be possible to develop reasonably accurate detectors of student affect for four LearnLabs that detect affect using only data from the interaction between the student and the keyboard/mouse.&lt;br /&gt;
&lt;br /&gt;
H2: We hypothesize that models of behaviors such as gaming the system, and off-task behavior, in combination with models of affect/behavior dynamics, can make affect detectors more accurate.&lt;br /&gt;
&lt;br /&gt;
H3: We hypothesize that models created using data from three LearnLabs will perform significantly better than chance in data from a fourth LearnLab, with no re-training (or limited EM-based modification that requires no new labeled data). &lt;br /&gt;
&lt;br /&gt;
H4: We hypothesize that these affect models will become a valuable component of future research in the M&amp;amp;M and CMDM thrusts.&lt;br /&gt;
&lt;br /&gt;
=== Research Process ===&lt;br /&gt;
&lt;br /&gt;
We will develop detectors of the M&amp;amp;M (metacognitive &amp;amp; motivational) behaviors of gaming the system, off-task behavior, proper help use, on-task conversation, help avoidance and self-explanation without scaffolding. This set of behaviors has already been effectively detected in mathematics LearnLabs. We will model the dynamics between these behaviors and student affect (following on work in the PSLC and at Memphis), in order to be able to leverage these detectors to create detectors of the affective states of engaged concentration, boredom, confusion, and frustration (the dynamics models will enable us to set Bayesian priors for how likely an affective state is at a given time). &lt;br /&gt;
&lt;br /&gt;
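The role of the dynamics models in setting Bayesian priors can be sketched as follows: the transition model supplies how likely each affective state is given the previous state, and this prior is combined with the detector's likelihood and renormalized. The four states come from the project description, but every probability below is invented for illustration, and this combination rule is only one plausible schematic, not the project's actual model.

```python
STATES = ["engaged", "bored", "confused", "frustrated"]

# Hypothetical dynamics model: P(next_state given previous_state),
# one row per previous state. Numbers are invented for illustration.
TRANSITIONS = {
    "engaged":    {"engaged": 0.7, "bored": 0.1, "confused": 0.15, "frustrated": 0.05},
    "bored":      {"engaged": 0.2, "bored": 0.6, "confused": 0.1,  "frustrated": 0.1},
    "confused":   {"engaged": 0.3, "bored": 0.1, "confused": 0.4,  "frustrated": 0.2},
    "frustrated": {"engaged": 0.2, "bored": 0.2, "confused": 0.2,  "frustrated": 0.4},
}

def posterior(prev_state, likelihood):
    """Combine the transition prior with detector likelihoods P(obs given state)."""
    prior = TRANSITIONS[prev_state]
    unnorm = {s: prior[s] * likelihood[s] for s in STATES}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

# A student previously coded as confused, with a detector that also
# weakly favors confusion (invented likelihoods):
post = posterior("confused",
                 {"engaged": 0.2, "bored": 0.1, "confused": 0.5, "frustrated": 0.2})
```

The prior concentrates probability mass on states that commonly follow the previous one, so a weak detector signal is interpreted in the context of the affect dynamics.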
These detectors will be developed for multiple LearnLabs, and the generalizability of detectors across LearnLabs will be one of the focuses of study during this project. We anticipate developing detectors for Algebra and Geometry, the Chemistry Virtual Lab, MathTutor, and Science ASSISTments. Each of these learning environments presents a context where complex learning occurs, fine-grained interaction behavior is logged, and the outputs of the detectors will provide leverage on a number of research questions of interest. &lt;br /&gt;
&lt;br /&gt;
“Ground truth” for the M&amp;amp;M behavior categories will be established through quantitative field observations. “Ground truth” for the affect categories will be established by field observations and infrequent pop-up questions. Work will be conducted to increase the reliability of quantitative field observations of affect to a standard considered appropriate by psychology journals, through repeated coding and discussion sessions and the development of a detailed coding manual based on prior work to code affect in field settings and work to code emotions from facial expressions. &lt;br /&gt;
&lt;br /&gt;
Models will be developed solely using distilled log file data of the sort currently collected in [[DataShop]] (more sophisticated sensors will NOT be included in this project). The models will be built with a combination of machine learning and knowledge engineering (specifically, through leveraging and adapting existing knowledge-engineered models such as Aleven et al’s help-seeking model and Shih et al’s self-explanation model). Generalization of models across learning environments will involve expectation maximization to adapt models to new data sets, and/or leveraging the CTLVS1 taxonomy to develop meta-models that relate prediction features to design features. We will first develop models for individual learning environments and then extend them across environments.&lt;br /&gt;
&lt;br /&gt;
=== Research Plan ===&lt;br /&gt;
&lt;br /&gt;
1.	Develop software for conducting field observations (cf. Baker et al, 2004) with PDAs and synchronizing with [[DataShop]] data -- software development completed; as of Aug 2010, synchronization verification in progress&lt;br /&gt;
&lt;br /&gt;
2.	Study and improve quantitative field coding of student affect states&lt;br /&gt;
&lt;br /&gt;
*	The Research Associate and Assistant will conduct multiple coding and discussion sessions with the PI, and develop a detailed coding manual (including some video examples)&lt;br /&gt;
 &lt;br /&gt;
3.	Collect training data (months 4-7) -- as of Aug 2010, first data set collected; other data collection in progress&lt;br /&gt;
&lt;br /&gt;
*	Starting first in one LearnLab and rolling across LearnLabs, so that we have all the data for one LearnLab first. Collecting data on all constructs at once. Then the programmer/PI can start developing detectors for constructs in the first LearnLab, while the RAs keep collecting more data in the second and subsequent LearnLabs &lt;br /&gt;
*	Quantitative field observations (cf. Baker et al, 2004)&lt;br /&gt;
&lt;br /&gt;
4.	Develop detectors (months 5-8)&lt;br /&gt;
&lt;br /&gt;
*       Utilizing a combination of existing data mining tools and code previously used by Baker to create Latent Response Model-based detectors of [[Gaming the System]] and [[Off-Task Behavior]] &lt;br /&gt;
&lt;br /&gt;
*	Develop and leverage behavior-affect temporal dynamics models (cf. D’Mello et al, 2007; Baker, Rodrigo, &amp;amp; Xolocotzin, 2007) to create priors for predicting affect&lt;br /&gt;
&lt;br /&gt;
*	Use log data to predict field observations, student responses&lt;br /&gt;
&lt;br /&gt;
*	Student-level cross-validation used for assessing goodness of detectors&lt;br /&gt;
&lt;br /&gt;
5.	Develop meta-detectors (months 9-12)&lt;br /&gt;
&lt;br /&gt;
*	Use expectation maximization to adapt models to new data sets&lt;br /&gt;
&lt;br /&gt;
*	Leverage the CTLVS1 taxonomy to develop meta-models that relate prediction features to design features&lt;br /&gt;
&lt;br /&gt;
*	Cross-validation at the grain size of transfer between units (within each LearnLab) to validate appropriateness for the whole LearnLab&lt;br /&gt;
&lt;br /&gt;
*	Test goodness of models when {train on 3 tutors, transfer to tutor #4} to evaluate effectiveness for entirely new tutors&lt;br /&gt;
&lt;br /&gt;
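Student-level cross-validation (step 4 above) can be sketched as follows: folds are partitioned by student identifier, so no student contributes data to both the training and test sides of any fold. The row layout and helper function are illustrative assumptions, not project code.

```python
def student_level_folds(rows, n_folds):
    """rows: list of (student_id, features, label) tuples.
    Returns (train, test) pairs split at the student level."""
    students = sorted({sid for sid, _, _ in rows})
    folds = []
    for k in range(n_folds):
        # Hold out every n_folds-th student, starting at offset k.
        held_out = set(students[k::n_folds])
        train = [r for r in rows if r[0] not in held_out]
        test = [r for r in rows if r[0] in held_out]
        folds.append((train, test))
    return folds

# Tiny invented data set: three students, four observations.
rows = [("s1", [0.1], 0), ("s1", [0.4], 1), ("s2", [0.9], 1), ("s3", [0.2], 0)]
folds = student_level_folds(rows, 3)
```

Splitting by student rather than by observation is what makes the reported accuracy an estimate of performance on entirely new students.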
=== Independent Variables ===&lt;br /&gt;
&lt;br /&gt;
n/a (see Research Plan)&lt;br /&gt;
&lt;br /&gt;
=== Dependent Variables ===&lt;br /&gt;
&lt;br /&gt;
n/a (see Research Plan)&lt;br /&gt;
&lt;br /&gt;
=== Affective States and M&amp;amp;M Behaviors to be Modeled ===&lt;br /&gt;
&lt;br /&gt;
Affective States:&lt;br /&gt;
* Engaged Concentration (a subset of [[Flow]]) (cf. Baker et al, 2010)&lt;br /&gt;
* Boredom (Kapoor, Burleson, &amp;amp; Picard, 2007)&lt;br /&gt;
* Frustration (Kapoor, Burleson, &amp;amp; Picard, 2007)&lt;br /&gt;
&lt;br /&gt;
M&amp;amp;M Behaviors:&lt;br /&gt;
&lt;br /&gt;
* [[Gaming the system]] (Baker et al, 2004)&lt;br /&gt;
* [[Off-Task Behavior]] (Baker, 2007)&lt;br /&gt;
* Proper Help Use (Aleven et al, 2006)&lt;br /&gt;
* On-Task Conversation&lt;br /&gt;
* [[Help Avoidance]] (Aleven et al, 2006)&lt;br /&gt;
* [[Self-Explanation]] without scaffolding (Shih et al, 2008)&lt;br /&gt;
&lt;br /&gt;
=== Planned Studies ===&lt;br /&gt;
&lt;br /&gt;
In 2010, data will be collected in the Algebra, Geometry, and Chemistry LearnLabs, as well as in MathTutor and Science ASSISTments.&lt;br /&gt;
&lt;br /&gt;
=== Explanation ===&lt;br /&gt;
=== Further Information ===&lt;br /&gt;
=== Connections ===&lt;br /&gt;
&lt;br /&gt;
=== Annotated Bibliography ===&lt;br /&gt;
=== References ===&lt;br /&gt;
&lt;br /&gt;
Aleven, V., McLaren, B., Roll, I., &amp;amp; Koedinger, K. (2006). Toward meta-cognitive tutoring: A model of help seeking with a Cognitive Tutor. International Journal of Artificial Intelligence and Education, 16, 101-128.&lt;br /&gt;
&lt;br /&gt;
Baker, R.S.J.d. (2007) Modeling and Understanding Students&#039; Off-Task Behavior in Intelligent Tutoring Systems. Proceedings of ACM CHI 2007: Computer-Human Interaction, 1059-1068.&lt;br /&gt;
&lt;br /&gt;
Baker, R.S., Corbett, A.T., Koedinger, K.R., Wagner, A.Z. (2004) Off-Task Behavior in the Cognitive Tutor Classroom: When Students &amp;quot;Game The System&amp;quot;. Proceedings of ACM CHI 2004: Computer-Human Interaction, 383-390. &lt;br /&gt;
&lt;br /&gt;
Baker, R.S.J.d., Rodrigo, M.M.T., Xolocotzin, U.E. (2007) The Dynamics of Affective Transitions in Simulation Problem-Solving Environments. Proceedings of the Second International Conference on Affective Computing and Intelligent Interaction.&lt;br /&gt;
&lt;br /&gt;
D&#039;Mello, S. K., Picard, R. W., and Graesser, A. C. (2007) Towards an Affect-Sensitive AutoTutor. Special issue on Intelligent Educational Systems – IEEE Intelligent Systems, 22(4), 53-61. &lt;br /&gt;
&lt;br /&gt;
Kapoor, A., Burleson, W., &amp;amp; Picard, R. W. (2007). Automatic prediction of frustration. International Journal of Human-Computer Studies, 65, 724-736.&lt;br /&gt;
&lt;br /&gt;
Shih, B., Koedinger, K., and Scheines, R. (2008) A Response Time Model for Bottom-Out Hints as Worked Examples. Proceedings of the 1st International Conference on Educational Data Mining, 117-126. &lt;br /&gt;
&lt;br /&gt;
=== Future Plans ===&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Woolerystixmaker</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Application_of_SimStudent_for_Error_Analysis&amp;diff=12128</id>
		<title>Application of SimStudent for Error Analysis</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Application_of_SimStudent_for_Error_Analysis&amp;diff=12128"/>
		<updated>2011-08-26T09:24:27Z</updated>

		<summary type="html">&lt;p&gt;Woolerystixmaker: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==Towards a theory of learning errors==&lt;br /&gt;
&lt;br /&gt;
===Personnel===&lt;br /&gt;
&lt;br /&gt;
*PI: Noboru Matsuda&lt;br /&gt;
*Key Faculty: William W. Cohen, Kenneth R. Koedinger&lt;br /&gt;
&lt;br /&gt;
===Abstract===&lt;br /&gt;
&lt;br /&gt;
The purpose of this project is to study how students &#039;&#039;fail to learn&#039;&#039; correct knowledge components when studying from examples, instead acquiring incorrect knowledge components whose later application produces (typical) errors when solving problems.  We utilize a computational model of learning, called [http://www.SimStudent.org SimStudent], that learns cognitive skills inductively from examples, either by passively reviewing worked-out examples or by being actively engaged in tutored problem-solving.  &lt;br /&gt;
&lt;br /&gt;
We are particularly interested in studying how differences in prior knowledge affect the nature and rate of learning. We hypothesize that when students rely on shallow, domain-general features (which we call &amp;quot;weak&amp;quot; features) as opposed to deep, more domain-specific features (&amp;quot;strong&amp;quot; features), they are more likely to make induction errors. &lt;br /&gt;
&lt;br /&gt;
To test this hypothesis, we vary SimStudent&#039;s prior knowledge and study how and when erroneous skills are learned by analyzing learning outcomes (both the process of learning and performance on the post-test).&lt;br /&gt;
&lt;br /&gt;
===Overview of SimStudent===&lt;br /&gt;
&lt;br /&gt;
A fundamental technology underlying SimStudent is Inductive Logic Programming (Muggleton, 1999), applied as a form of programming by demonstration (Cypher, 1993). Prior to learning, SimStudent is given a set of &#039;&#039;feature predicates&#039;&#039; and &#039;&#039;operators&#039;&#039; as prior knowledge. &lt;br /&gt;
&lt;br /&gt;
A feature predicate is a Boolean function that tests for the existence of a certain feature. For example, isPolynomial(&amp;quot;3x+1&amp;quot;) returns true, but isConstantTerm(&amp;quot;3x&amp;quot;) returns false. An operator, on the other hand, is a more generic function that manipulates the various forms of objects involved in a target task. For example, addTerm(&amp;quot;3x&amp;quot;, &amp;quot;2x&amp;quot;) returns &amp;quot;5x&amp;quot; and getCoefficient(&amp;quot;-4y&amp;quot;) returns &amp;quot;-4.&amp;quot; &lt;br /&gt;
&lt;br /&gt;
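A minimal sketch of such prior knowledge in Python; the function names come from the examples above, but the implementations (simple string and regex manipulation) are assumptions for illustration only.

```python
import re

# Illustrative stand-ins for SimStudent's prior knowledge. Only the
# names appear in the project text; the bodies are assumptions.

def is_polynomial(expr):
    # Feature predicate: true if the expression contains a variable, e.g. "3x+1".
    return bool(re.search(r"[a-zA-Z]", expr))

def is_constant_term(expr):
    # Feature predicate: true only for a bare number, e.g. "5" but not "3x".
    return bool(re.fullmatch(r"-?\d+", expr))

def get_coefficient(term):
    # Operator: coefficient of a term, e.g. "-4y" gives -4, "x" gives 1.
    digits = re.match(r"(-?\d*)", term).group(1)
    if digits in ("", "-"):
        return int(digits + "1")
    return int(digits)

def add_term(a, b):
    # Operator: add two like terms, e.g. "3x" and "2x" give "5x".
    coef = get_coefficient(a) + get_coefficient(b)
    var = re.sub(r"-?\d*", "", a, count=1)
    return "{}{}".format(coef, var)
```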
To learn cognitive skills, SimStudent generalizes &#039;&#039;examples&#039;&#039; of individual skill applications. Two types of examples must be given to SimStudent: (1) positive examples that show when to apply a particular skill, and (2) negative examples that show when &#039;&#039;not&#039;&#039; to apply it. &lt;br /&gt;
&lt;br /&gt;
Positive examples are acquired from (1) steps demonstrated in worked-out examples, (2) steps demonstrated as a hint during tutoring, and (3) steps performed correctly by SimStudent itself during tutoring. In each case, the context of the skill application (i.e., the problem state) is stored as a positive example for that particular skill.  &lt;br /&gt;
&lt;br /&gt;
Negative examples are acquired either when (1) a positive example is generated, or (2) SimStudent makes an error during tutoring. When a positive example is made for a certain skill, say S, the example also becomes a negative example for every skill other than S. Such an example is called an &#039;&#039;implicit negative example.&#039;&#039;  An implicit negative example becomes a positive example if the corresponding skill is applied in the specified situation. &lt;br /&gt;
&lt;br /&gt;
Given a set of positive and negative examples for a skill, SimStudent generates a hypothesis (in the form of a production rule) representing when and how to apply the skill. The hypothesis is generated so that it covers all positive examples and none of the negative examples.&lt;br /&gt;
&lt;br /&gt;
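The generalization step can be illustrated with a deliberately simplified stand-in. SimStudent actually uses inductive logic programming; a set-intersection learner over feature predicates only demonstrates the same contract of covering all positives and no negatives. All predicate names and examples below are hypothetical.

```python
# Toy inducer: each example is a frozenset of feature predicates that
# hold in that problem state. (Real SimStudent learns first-order
# production rules; this is a propositional simplification.)

def induce_condition(positives, negatives):
    """Return the most specific conjunction covering all positive
    examples, or None if it also covers some negative example."""
    condition = frozenset.intersection(*positives)
    # The hypothesis must apply to no negative example.
    if any(condition.issubset(neg) for neg in negatives):
        return None
    return condition

positives = [
    frozenset({"isPolynomial(lhs)", "hasFraction(lhs)"}),
    frozenset({"isPolynomial(lhs)", "hasFraction(lhs)", "isConstant(rhs)"}),
]
negatives = [frozenset({"isPolynomial(lhs)"})]

learned = induce_condition(positives, negatives)
```

Here the learned condition keeps only what all positive states share, and is rejected outright if that conjunction would also fire on a negative example.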
===Background and Significance===&lt;br /&gt;
&lt;br /&gt;
A number of models of student errors have been proposed (Brown &amp;amp; Burton, 1978; Langley &amp;amp; Ohlsson, 1984; Sleeman, Kelly, Martinak, Ward, &amp;amp; Moore, 1989; Weber, 1996; Young &amp;amp; O&#039;Shea, 1981).  Our effort builds on this past work by exploring how differences in prior knowledge affect the nature of the incorrect skills acquired and the errors derived from them. We are particularly interested in errors made by applying incorrect skills, and our computational model explains the process of learning such incorrect skills as incorrect induction from examples.&lt;br /&gt;
&lt;br /&gt;
We hypothesize that incorrect generalizations are more likely when students have weaker, more general prior knowledge for encoding incoming information. This knowledge is typically perceptually grounded, in contrast to deeper or more abstract encoding knowledge.  An example of such perceptually grounded prior knowledge is recognizing the 3 in x/3 simply as a number rather than as a denominator. Such an interpretation might lead students to learn an inappropriate generalization such as &amp;quot;multiply both sides by a number in the left-hand side of the equation&amp;quot; after observing that x/3=5 yields x=15. If this generalization is applied to an equation like 4x=2, the error of multiplying both sides by 4 is produced. &lt;br /&gt;
&lt;br /&gt;
We call this type of perceptually grounded prior knowledge &amp;quot;weak&amp;quot; prior knowledge, in a sense similar to Newell and Simon’s (1972) weak reasoning methods. Weak knowledge can apply across domains and can yield successful results prior to domain-specific instruction.  However, in contrast to &amp;quot;strong&amp;quot; domain-specific knowledge, weak knowledge is more likely to lead to incorrect conclusions. &lt;br /&gt;
&lt;br /&gt;
In general, a particular example can be modeled with both weak and strong operators. For example, suppose the step &amp;quot;multiply by 3&amp;quot; is demonstrated for x/3=5. This step can be explained by a strong operator, getDenominator(x/3), which returns the denominator of the given fraction term; that number is then multiplied to both sides. The same step can also be explained by a weak operator, getNumberStr(x/3), which returns the left-most number in a given expression. In this context, getNumberStr() is considered weaker than getDenominator(), because a production rule with getNumberStr() explains a broader range of errors. For example, consider how we could model the error schema &amp;quot;multiply by A.&amp;quot; This error schema can be modeled with getNumberStr() and multiply(): get a number and multiply both sides by that number. Without the weak operator, we would need different (disjunctive) production rules to model the same error schema for different problem schemata: getNumerator() for A/v=C and getCoefficient() for Av=C.  &lt;br /&gt;
&lt;br /&gt;
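The weak/strong contrast can be sketched in code: the operator names come from the text above, but the regex implementations are assumptions made for illustration.

```python
import re

def get_denominator(term):
    # Strong operator: applies only to a fraction term like "x/3".
    m = re.fullmatch(r".+/(\d+)", term)
    return int(m.group(1)) if m else None

def get_number_str(expr):
    # Weak operator: the left-most number in any expression.
    m = re.search(r"\d+", expr)
    return int(m.group()) if m else None

# On x/3 = 5, both operators suggest multiplying both sides by 3 ...
assert get_denominator("x/3") == 3
assert get_number_str("x/3") == 3

# ... but on 4x = 2, only the weak operator fires, yielding the
# human-like error of multiplying both sides by 4.
weak_choice = get_number_str("4x")
strong_choice = get_denominator("4x")
```

Because the weak operator applies to any expression containing a number, a rule built on it over-generalizes in exactly the way the error schema describes.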
===Human Students Error Analysis===&lt;br /&gt;
&lt;br /&gt;
===Research Question===&lt;br /&gt;
&lt;br /&gt;
How do differences in prior knowledge affect the type and rate of learning errors?  In particular, does &amp;quot;weak&amp;quot; prior knowledge foster more induction errors than &amp;quot;strong&amp;quot; prior knowledge, and if so, to what extent does such &amp;quot;weak&amp;quot; prior-knowledge learning account for errors that (human) students commonly make? &lt;br /&gt;
&lt;br /&gt;
===Hypothesis===&lt;br /&gt;
&lt;br /&gt;
Since &amp;quot;weak&amp;quot; prior knowledge applies in a broader range of contexts than &amp;quot;strong&amp;quot; prior knowledge, SimStudent given &amp;quot;weak&amp;quot; prior knowledge should learn overly general rules that produce more human-like errors. &lt;br /&gt;
&lt;br /&gt;
===Study Variables===&lt;br /&gt;
&lt;br /&gt;
====Independent Variable====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Prior knowledge&#039;&#039;&#039;: implemented as &amp;quot;operators&amp;quot; and &amp;quot;feature predicates&amp;quot; for SimStudent. &lt;br /&gt;
&lt;br /&gt;
====Dependent Variables====&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Step score&#039;&#039;&#039;: For a quantitative assessment, we computed a &#039;&#039;step score&#039;&#039; for each step in the test problems as follows: 0 if no correct rule application is made; otherwise, the ratio of the number of correct rule applications to the number of all rule applications, with SimStudent showing all possible rule applications on the step. &lt;br /&gt;
&lt;br /&gt;
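The step-score definition can be written out directly; representing each rule application in the conflict set as a correct/incorrect flag is an assumption about the data layout, not the project's code.

```python
# Step score as defined above: 0 if no rule application on the step is
# correct; otherwise the ratio of correct rule applications to all rule
# applications SimStudent can show on the step.

def step_score(applications):
    """applications: one boolean per rule application (True = correct)."""
    correct = sum(applications)
    return correct / len(applications) if correct else 0.0
```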
&#039;&#039;&#039;Error prediction&#039;&#039;&#039;: For a qualitative assessment, we are particularly interested in errors made by applying learned rules as well as the accuracy of prediction. Given a step &#039;&#039;S&#039;&#039; performed by a human student at an intermediate state &#039;&#039;N&#039;&#039;, SimStudent is asked to compute a conflict set on &#039;&#039;N&#039;&#039;. Rule application R&#039;&#039;i&#039;&#039; (&#039;&#039;i&#039;&#039; = 1, …, &#039;&#039;n&#039;&#039;) is coded as follows:&lt;br /&gt;
&lt;br /&gt;
: True Positive: R&#039;&#039;i&#039;&#039; yields the same step as &#039;&#039;S&#039;&#039;, and &#039;&#039;S&#039;&#039; is a correct step.&lt;br /&gt;
: False Positive: R&#039;&#039;i&#039;&#039; yields a correct step that is not the same as &#039;&#039;S&#039;&#039; (&#039;&#039;S&#039;&#039; may be incorrect).&lt;br /&gt;
: False Negative: R&#039;&#039;i&#039;&#039; yields an incorrect step that is not the same as &#039;&#039;S&#039;&#039; (&#039;&#039;S&#039;&#039; may be correct).&lt;br /&gt;
: True Negative: R&#039;&#039;i&#039;&#039; yields the same step as &#039;&#039;S&#039;&#039; and &#039;&#039;S&#039;&#039; is an incorrect step.&lt;br /&gt;
&lt;br /&gt;
Error prediction is computed as True Negative / (True Negative + False Negative) to understand how well SimStudent predicted human-like errors.&lt;br /&gt;
&lt;br /&gt;
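The four-way coding and the error-prediction ratio can be sketched as follows; the helper names and the tiny conflict set are hypothetical.

```python
# Code each rule application in the conflict set against the human
# student's step S, then compute Error prediction = TN / (TN + FN).

def code_application(rule_step, rule_step_correct, step_s, s_is_correct):
    if rule_step == step_s:
        # Rule reproduces the student's step: TP if S was correct, TN if not.
        return "TP" if s_is_correct else "TN"
    # Rule yields a different step: FP if that step is correct, FN if not.
    return "FP" if rule_step_correct else "FN"

def error_prediction(codes):
    tn, fn = codes.count("TN"), codes.count("FN")
    return tn / (tn + fn) if (tn + fn) else 0.0

# Hypothetical conflict set for an incorrect student step "8x=2":
codes = [
    code_application("8x=2", False, "8x=2", False),   # reproduces the error
    code_application("x=1/2", True, "8x=2", False),   # correct but different step
    code_application("4x=0", False, "8x=2", False),   # a different incorrect step
]
```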
===Findings===&lt;br /&gt;
&lt;br /&gt;
====Learning Curve====&lt;br /&gt;
&lt;br /&gt;
Figure 1 shows average step score, aggregated across the test problems and student conditions. The X-axis shows the number of training iterations.&lt;br /&gt;
&lt;br /&gt;
The Weak-PK and Strong-PK conditions had similar success rates on test problems after the first 8 training problems.  After that, the performance of the two conditions began to diverge. On the final test after 20 training problems, the Strong-PK condition was 82% correct while the Weak-PK condition was 66% correct, a large and statistically significant difference (t = 4.00, p &amp;lt; .001).  &lt;br /&gt;
&lt;br /&gt;
A simple power-law fit to the learning curves (converting success rate to log-odds) showed that the slope (or rate) of the Weak-PK learning curve (.78) is smaller (i.e., slower) than that of the Strong-PK learning curve (.82).  We then subtracted the two functions in their log-log form and verified in a linear regression analysis that the coefficient of the number of training problems (which predicts the difference in rate) is significantly greater than 0 (p &amp;lt; .05).&lt;br /&gt;
&lt;br /&gt;
[[Image:NM-LearningCurve.jpg]]&lt;br /&gt;
&lt;br /&gt;
Figure 1: Average step score after each of the 20 training problems for SimStudents with either strong or weak prior knowledge.&lt;br /&gt;
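The fitting procedure above can be sketched in a few lines: convert each success rate to log-odds, then regress log-odds on the log of the training-problem number; the slope estimates the learning rate. The synthetic data below (an exact power law in the odds) is invented to check the mechanics and is not the study's data.

```python
import math

def power_law_slope(rates):
    """Least-squares slope of log-odds vs. log(problem number)."""
    xs = [math.log(n) for n in range(1, len(rates) + 1)]
    ys = [math.log(r / (1.0 - r)) for r in rates]  # success rate to log-odds
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic curve whose odds follow n**0.8 exactly, so the recovered
# slope should be 0.8.
rates = [n ** 0.8 / (1.0 + n ** 0.8) for n in range(1, 21)]
slope = power_law_slope(rates)
```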
&lt;br /&gt;
====Error Prediction====&lt;br /&gt;
&lt;br /&gt;
Figure 2 shows the number of True Negative predictions made on the test problems for each of the training iterations. &lt;br /&gt;
&lt;br /&gt;
Surprisingly, the Weak-PK condition made as many as 22 human-like errors on the 11 test problems, whereas the Strong-PK condition hardly made any human-like errors. &lt;br /&gt;
&lt;br /&gt;
[[Image:NM-Num-TN-Prediction.jpg]]&lt;br /&gt;
&lt;br /&gt;
Figure 2: Number of True Negative predictions, which are the same errors made both by SimStudent and human students on the same step in the test problems.&lt;br /&gt;
&lt;br /&gt;
===Publications===&lt;br /&gt;
&lt;br /&gt;
*Matsuda, N., Lee, A., Cohen, W. W., &amp;amp; Koedinger, K. R. (2009). A Computational Model of How Learner Errors Arise from Weak Prior Knowledge. In Proceedings of the Annual Conference of the Cognitive Science Society.&lt;br /&gt;
&lt;br /&gt;
===References===&lt;br /&gt;
&lt;br /&gt;
*Booth, J. L., &amp;amp; Koedinger, K. R. (2008). Key misconceptions in algebraic problem solving. In B. C. Love, K. McRae &amp;amp; V. M. Sloutsky (Eds.), Proceedings of the 30th Annual Conference of the Cognitive Science Society (pp. 571-576). Austin, TX: Cognitive Science Society.&lt;br /&gt;
&lt;br /&gt;
*Cypher, A. (Ed.). (1993). Watch what I do: Programming by Demonstration. Cambridge, MA: MIT Press.&lt;br /&gt;
&lt;br /&gt;
*Muggleton, S. (1999). Inductive Logic Programming: Issues, results and the challenge of Learning Language in Logic. Artificial Intelligence, 114(1-2), 283-296.&lt;br /&gt;
&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Woolerystixmaker</name></author>
	</entry>
	<entry>
		<id>https://learnlab.org/mediawiki-1.44.2/index.php?title=Analogical_comparison&amp;diff=12127</id>
		<title>Analogical comparison</title>
		<link rel="alternate" type="text/html" href="https://learnlab.org/mediawiki-1.44.2/index.php?title=Analogical_comparison&amp;diff=12127"/>
		<updated>2011-08-26T09:23:30Z</updated>

		<summary type="html">&lt;p&gt;Woolerystixmaker: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Analogical comparison operates by aligning and mapping two example problem representations to one another and then extracting their commonalities (Gentner, 1983; Gick &amp;amp; Holyoak, 1983; Hummel &amp;amp; Holyoak, 2003). This process discards the elements of the knowledge representation that do not overlap between the two examples but preserves the common elements. The resulting knowledge organization typically contains fewer superficial features than the examples but retains the deep causal structure of the problems.&lt;br /&gt;
&lt;br /&gt;
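As a toy illustration only: if two aligned examples are represented as sets of relational propositions, extracting commonalities amounts to keeping the intersection and discarding the non-overlapping surface elements. Real structure-mapping aligns relations far more carefully than this; all propositions below are invented.

```python
# Two hypothetical example representations: a shared causal relation
# plus differing surface features.
example_a = {
    ("causes", "pressure_difference", "flow"),
    ("surface-feature", "container", "blue"),
}
example_b = {
    ("causes", "pressure_difference", "flow"),
    ("surface-feature", "container", "copper"),
}

# Comparison keeps the common (deep) structure and drops the rest.
schema = example_a.intersection(example_b)
```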
===References===&lt;br /&gt;
*Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy, &#039;&#039;Cognitive Science, 7&#039;&#039;, 155-170.&lt;br /&gt;
*Gick, M. L., &amp;amp; Holyoak, K. J. (1983). Schema induction and analogical transfer. &#039;&#039;Cognitive Psychology, 15&#039;&#039;, 1-38.&lt;br /&gt;
*Hummel, J. E., &amp;amp; Holyoak, K. J. (2003). A symbolic-connectionist theory of relational inference and generalization. &#039;&#039;Psychological Review, 110&#039;&#039;, 220-264.&lt;br /&gt;
&lt;br /&gt;
[[Category: Learning Processes]]&lt;br /&gt;
[[Category: Glossary]]&lt;br /&gt;
[[Category: Coordinative Learning]]&lt;br /&gt;
&lt;/div&gt;</summary>
		<author><name>Woolerystixmaker</name></author>
	</entry>
</feed>