Collected User Requests

From LearnLab
Revision as of 19:04, 2 September 2009

Annotations

Have a link from the DataShop to the Theory Wiki (Dataset to Project Page)

  • Michael Bett, ET Mtg 11/14/2007
  • From meeting minutes:
    • Michael: Link the dataset to the project page? In the pipeline have a clickable link to the project page (make project name clickable).
    • Brett: Link to a dataset directly? Is that obvious to users? Click on dataset link -> log in -> redirected back to dataset.

Annotations on transaction level

  • Ryan, ET Mtg 12/5/2007
  • Has models that can annotate behaviors such as gaming, boredom, etc.

Annotate Pages

  • Ryan, DS Team Mtg 5/23/2008
  • See the cool thing created by Jeffrey Heer, where all the settings of the page were recorded with the comment.

Annotations on the student level

  • Ido Roll, Interview with Brett Leber, 1/19/2009
  • Could annotate at the student level, e.g., the percentage of time the student is gaming
    • Ido has mentioned annotations on the student level

Dataset Discussion - Capture data-integrity issues

  • Ken Koedinger, Team Meeting, 8/15/2009
  • As a stakeholder in the DataShop project, I want to capture and publicize the data-integrity issues discovered with data sets so that data is better documented (and so we've fulfilled a promise to our funders to better document data).
  • As a user of DataShop, I want to discuss datasets and have that discussion attached to the dataset so that others can comment and better understand any data-integrity issues I've found.

Linking to internal pages

Data Modeling

Non-KC Modeling

Automatic Distillation

  • Ryan, Summer 2008, Startup Memo
  • As an educational data miner wishing to develop a machine learned model with PSLC data, I would like to be able to automatically distill data features (e.g. custom fields) commonly used in past educational data mining research for a new data set (see, for instance, Baker, Corbett, Roll, & Koedinger, 2008 in UMUAI)
  • Could be implemented as a plug-in
  • Also interested in this feature idea:
    • Dan Franklin, Oct 2008
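As a minimal sketch of what automatic distillation could look like: compute a few derived features per transaction of the kind used in prior EDM work (running error rate, time relative to the dataset average, help-request flags). The field names (`duration`, `outcome`) are illustrative, not DataShop's actual export schema.

```python
# Hypothetical sketch of automatic feature distillation. Field names are
# placeholders, not DataShop's real transaction-export columns.
from statistics import mean

def distill_features(transactions):
    """Add derived custom fields to a list of transaction dicts."""
    avg = mean(t["duration"] for t in transactions)
    out = []
    errors_so_far = 0
    for i, t in enumerate(transactions, start=1):
        feats = dict(t)
        feats["rel_duration"] = t["duration"] / avg       # time vs. dataset average
        if t["outcome"] == "INCORRECT":
            errors_so_far += 1
        feats["error_rate_so_far"] = errors_so_far / i    # running error rate
        feats["is_help_request"] = t["outcome"] == "HINT"
        out.append(feats)
    return out
```

Implemented as a plug-in, each distilled feature would presumably be written back as a custom field on the transaction.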

Upload model and apply it to new data set

  • EDM researcher would like to take a model, expressible as a linear formula on DataShop fields or as a simple code procedure (e.g., Bayesian Knowledge Tracing, which Ryan has code for), and apply it to a new data set [Maxine, Sept 2008; Ryan, Sept 2008; required for prior Hao request]
  • May work best as a plug-in
    • Code to display GUI to choose which data sets to use, calls model code, re-import to DataShop
    • Good to have a way to apply many models, as soon as you import a data set
  • Phil has an idea that maybe fits within this one. Please move if there's a better category [Brett Leber]

This [transaction? kc? --ed.] relabeling is really mostly about enabling modeling in DataShop right? With this in mind, I think that it is actually a higher priority to have model alternatives in DataShop.... E.g. Investigators should be able to give you chunks of Java code according to a certain specification, and DataShop should be then able to run these over datasets (perhaps after a certain series of QA occurs according to an SOP) when the investigator clicks some button in DataShop.... Obviously this is a much larger project than adding columns, but it is also much more important in my mind.
--Phil Pavlik, email to Brett on 1/14/2009

  • Examples:
    • Example: running gaming detector in multiple tutors and comparing gaming frequencies
    • Example: applying Bayesian Knowledge Tracing to a new data set from the same LearnLab
    • Example: applying Ben Shih's models to many data sets [Ben Shih should be included in design of this feature; he is interested, and has a lot of good ideas]
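Phil's "chunks of code according to a certain specification" could be pinned down as a small plug-in contract. The following is a hypothetical sketch only; the names (`ModelPlugin`, `run`, `apply_plugin`) and the transaction fields are invented here, not part of DataShop.

```python
# Hypothetical plug-in contract for investigator-supplied models.
# All names and fields are illustrative, not an actual DataShop API.
from abc import ABC, abstractmethod

class ModelPlugin(ABC):
    """An investigator-supplied model DataShop could run over a dataset."""

    @abstractmethod
    def run(self, transactions):
        """Return one predicted value per transaction."""

class ConstantBaseline(ModelPlugin):
    """Trivial example plug-in: predict the overall correct rate everywhere."""
    def run(self, transactions):
        rate = sum(t["correct"] for t in transactions) / len(transactions)
        return [rate] * len(transactions)

def apply_plugin(plugin, transactions):
    # Server side: run the plug-in, then attach its predictions so they
    # could be re-imported as a custom field on each transaction.
    preds = plugin.run(transactions)
    return [dict(t, predicted=p) for t, p in zip(transactions, preds)]
```

A gaming detector or Knowledge-Tracing model would be another `ModelPlugin` subclass, which is what would let the same machinery cover all three examples above.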

Add Different Predicted Values

  • Ken Koedinger, ET Meeting, 10/10/2007
  • Would also like to add statistics, different predicted values than what LFA produces.
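For context, the predicted value LFA produces comes from the additive-factor form: the logistic of student proficiency plus, for each KC on the step, a KC intercept and a learning-rate term scaled by prior practice. A sketch, assuming that model form; an alternative predictor would replace this function while keeping the same (student, step, opportunity-count) inputs.

```python
# Sketch of the additive-factor (LFA/AFM) prediction:
#   p = logistic(theta_i + sum over KCs k of (beta_k + gamma_k * T_ik))
import math

def afm_predict(theta, betas, gammas, kcs, opportunities):
    """Predicted P(correct) for one student on one step.

    theta         -- student proficiency
    betas, gammas -- per-KC intercept and learning rate, keyed by KC name
    kcs           -- KCs the step exercises (the Q-matrix row for the step)
    opportunities -- prior practice count per KC for this student
    """
    logit = theta + sum(betas[k] + gammas[k] * opportunities[k] for k in kcs)
    return 1.0 / (1.0 + math.exp(-logit))
```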

Bayesian Knowledge Tracing

  • Ryan Baker, Startup Memo, Summer 2008
  • Bayesian Knowledge-Tracing built into DataShop, like LFA is.
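A minimal sketch of the standard BKT update this request refers to. The parameter values below (slip, guess, learn, initial knowledge) are placeholders, not fitted estimates.

```python
# Minimal Bayesian Knowledge Tracing, one KC. Parameter defaults are
# placeholders for illustration only.
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.3):
    """Update P(known) for one KC after observing one step outcome."""
    if correct:
        evidence = p_know * (1 - p_slip) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        evidence = p_know * p_slip / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # Learning transition after the practice opportunity.
    return evidence + (1 - evidence) * p_learn

def trace(outcomes, p_init=0.4, **params):
    """Run BKT over a student's sequence of correct/incorrect outcomes."""
    p = p_init
    history = []
    for correct in outcomes:
        p = bkt_update(p, correct, **params)
        history.append(p)
    return history
```

Built into DataShop the way LFA is, the traced P(known) values would presumably be stored per student, KC, and opportunity.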

KC Modeling

Automatically discovering new KC model

  • Vincent Aleven, Sept 2008
  • Possible to run some code (perhaps Hao's KC-model-selection code, perhaps something else generated by the CMDM thrust) to find a new best KC model.
  • As a learning sciences researcher, I would like DataShop to discover a new/better KC model for me.
  • Could be done as a plug-in

Same Skill Twice on Same Step

  • Ken Koedinger, email, 2/4/2009
  • Would like to be able to apply the same skill to a step twice during a KC Model Import.

Save KC Model Import Files

  • Ken Koedinger, email 3/4/2009
  • KC Model Import - save the file used to create the KCMs in case we need to recreate them.

Generate new KC Models with LFA

  • It would be nice to generate new KC Models with Hao's LFA code
  • Would need to specify factors.
  • Ideas on where this could run?
    • On a separate server? Request it to be run, specify duration. Have separate server queue up requests, email user when done.
    • In Java Applet on client machine? [Phil]

Log Likelihood and MAD

  • Hao
  • Log Likelihood, MAD problem, MAD step (store and show)
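The two fit statistics named above can be sketched directly: log likelihood of the observed outcomes under the model's predictions, and mean absolute deviation (MAD), which could then be aggregated per problem or per step as the request asks.

```python
# Sketch of the two fit statistics: log likelihood and MAD.
import math

def log_likelihood(actuals, predictions):
    """Sum of log P(observed outcome); actuals are 0/1, predictions in (0, 1)."""
    return sum(math.log(p if a else 1.0 - p)
               for a, p in zip(actuals, predictions))

def mad(actuals, predictions):
    """Mean absolute deviation between observed and predicted values."""
    return sum(abs(a - p) for a, p in zip(actuals, predictions)) / len(actuals)
```

"Store and show" would mean computing these once per KC model and displaying them alongside the model, with MAD broken out by problem and by step.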

Help

Home Page

Import

Miscellaneous

Analyses by LearnLab

  • Organize data by LearnLab, not by "data set" [Ryan, Aug 2008; Bob, Sep 2008; Maxine, Sep 2008]
  • Essentially, current data sets become samples, but the top-level unit is the LearnLab. You can take every data set in a LearnLab together as a sample.
  • Implies being able to run analyses across data sets, and export multiple data sets together; to create multi-data set samples
  • As a user of DataShop, I would like to look at learning curves for all Algebra data together (for example), or export all Algebra data
  • Important long-term, but is a lot of work -- in particular, we need to solve scalability issues first.

Save Settings Between Sessions

  • Bob Hausmann, User Meeting, 2/1/2008
  • DS could save settings between sessions.
    • "I do a lot of redoing the same steps" (eg, set cutoffs, select a KC model, select students).

Multiple steps per transaction

  • Kurt VanLehn, Feb 2007
  • Needed so that we do not have to create multiple transactions for the same actual action in Andes logs.

Demographic data

  • This has been mentioned by NSF visitors, AB, ESL, and some researchers.
  • Also mentioned at Winter Workshop 1/23/2008.
    • Derek/Sue-mei: Student background information not in DataShop. Would like to see a student or set of students from a particular demographic, and view them across datasets!
  • Note that Gail added demographic data to Additional Notes field on the Dataset Info page for many datasets. The idea here is to put that data into the database somewhere.

Single Sign On

  • Michael Bett, email, 10/8/2007
  • It would be nice if the following services had a single login account/password:
    1. Theory Wiki
    2. Learnlab.org
    3. ESL's OSS
    4. DataShop

Navigation Bar

New Visualizations/Reports

Reports

Dataset Info

Pointers to Hard-copy Data

  • Brett van de Sande, NSF Site Visit, 5/28/2008
  • Pointers to hard-copy data such as paper tests and/or homework, including contact information. It doesn't seem to make sense to scan a whole filing cabinet of paper if no one wants to look at it, and secondary researchers don't know about the filing cabinet to ask for it.

Error Report

Export

Learning Curve

Performance Profiler

Sample Selector

Web Services



See prioritized items on DataShop Feature Wish List.