Affiliations: Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA. ygong@wpi.edu | Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA. josephbeck@wpi.edu | Computer Science Department, Worcester Polytechnic Institute, Worcester, MA, USA. nth@wpi.edu
Abstract: Student modeling is a fundamental concept applicable to a variety of intelligent tutoring systems (ITS). However, there is little practical guidance on how to construct and train such models. This paper compares two approaches to student modeling, Knowledge Tracing (KT) and Performance Factors Analysis (PFA), by evaluating their predictive accuracy on individual student practice opportunities. We explore the space of design decisions for each approach and identify a set of “best practices” for each. In a head-to-head comparison, we find that PFA has considerably higher predictive accuracy than KT. In addition to being more accurate, PFA produced more plausible parameter estimates. Our best-performing model was a variant of PFA that ignored the tutor's transfer model; that is, it assumed all skills influenced performance on all problems. One possible implication is that this result generalizes: there may be benefit in considering models that incorporate information from more than the typical handful of skills associated with a problem in the transfer model. Alternatively, the explanation may be that the transfer model our tutor uses is particularly weak. We also found that both KT and PFA have relatively low predictive accuracy for cases where students generate incorrect responses, and two-thirds of the models' errors are false positives, indicating that a better means of determining when students will make mistakes is needed.
Keywords: Student modeling, knowledge tracing, performance factors analysis, expectation maximization, machine learning, model fitting approaches, data aging
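For readers unfamiliar with the two models, the sketch below gives the standard formulations from the literature (PFA as in Pavlik, Cen, and Koedinger; KT as in Corbett and Anderson); the exact variants, fitting procedures, and treatment of the transfer model evaluated in this paper are design decisions explored in the body of the work, so these equations should be read as background rather than as the paper's definitive specification. Here $s_{i,j}$ and $f_{i,j}$ are student $i$'s prior counts of successes and failures on skill $j$; $\beta_j$, $\gamma_j$, $\rho_j$ are the skill's easiness and success/failure weights; and $L_0$, $T$, $S$, $G$ are KT's initial-knowledge, learning, slip, and guess parameters.

\begin{align*}
\text{PFA:}\quad & m_i = \sum_{j \in \mathrm{KCs}} \bigl( \beta_j + \gamma_j\, s_{i,j} + \rho_j\, f_{i,j} \bigr),
\qquad P(\text{correct}) = \frac{1}{1 + e^{-m_i}} \\[4pt]
\text{KT:}\quad & P(\text{correct}_n) = P(L_n)(1 - S) + \bigl(1 - P(L_n)\bigr)G \\
& P(L_n \mid \text{correct}) = \frac{P(L_n)(1 - S)}{P(L_n)(1 - S) + \bigl(1 - P(L_n)\bigr)G},
\qquad P(L_n \mid \text{incorrect}) = \frac{P(L_n)\,S}{P(L_n)\,S + \bigl(1 - P(L_n)\bigr)(1 - G)} \\
& P(L_{n+1}) = P(L_n \mid \text{obs}) + \bigl(1 - P(L_n \mid \text{obs})\bigr)T, \qquad P(L_1) = L_0
\end{align*}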