Validity and Reliability of Student Models for Problem-Solving Activities
| Authors | |
|---|---|
| Year of publication | 2021 |
| Type | Article in Proceedings |
| Conference | Proceedings of the 11th International Conference on Learning Analytics and Knowledge |
| MU Faculty or unit | |
| Citation | |
| Web | https://dl.acm.org/doi/10.1145/3448139.3448140 |
| DOI | http://dx.doi.org/10.1145/3448139.3448140 |
| Keywords | student modeling; skills; difficulties; validity; reliability; performance measures; problem solving; introductory programming |
| Description | Student models are typically evaluated by predicting the correctness of the next answer. This approach is insufficient in the problem-solving context, especially for student models that use performance data beyond binary correctness. We propose more comprehensive methods for validating student models and illustrate them in the context of introductory programming. We demonstrate the insufficiency of the next answer correctness prediction task: it neither reveals the low validity of student models that use only binary correctness, nor shows the increased validity of models that use other performance data. The key message is that the prevalent use of next answer correctness for validating student models, and of binary correctness as the only model input, is not always warranted and limits progress in learning analytics. |
| Related projects | |
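
The description criticizes the prevalent practice of evaluating student models solely by how well they predict the correctness of the next answer. As a rough illustration of what that evaluation looks like, the following is a minimal Python sketch, not the paper's code: the data, features, and model are entirely synthetic and hypothetical, and serve only to show a binary-correctness-based student model scored with AUC on next-answer prediction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical toy data: each row is one student-item attempt.
# Features summarize only the binary-correctness history of the student.
rng = np.random.default_rng(0)
n = 500
prior_success_rate = rng.uniform(0, 1, n)   # fraction of past answers correct
prior_attempts = rng.integers(0, 20, n)     # number of past attempts
X = np.column_stack([prior_success_rate, prior_attempts])

# Synthetic binary correctness of the next answer, loosely correlated with history.
y = (rng.uniform(0, 1, n) < 0.3 + 0.5 * prior_success_rate).astype(int)

# A simple student model that uses only binary correctness as input.
model = LogisticRegression().fit(X, y)
pred = model.predict_proba(X)[:, 1]

# The evaluation the paper argues is insufficient on its own:
# accuracy of next-answer correctness prediction, measured by AUC.
print("next-answer correctness AUC:", round(roc_auc_score(y, pred), 3))
```

The paper's point is that a good score on this kind of task does not by itself establish the validity of the model, particularly when richer performance data (e.g., solving time or partial progress) are available in problem-solving activities.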