Explainable Artificial Intelligence in Education: A Tutorial for Identifying the Variables that Matter

Mohammed Saqr and Sonsoles López-Pernas
Advanced Learning Analytics Methods, 2026, pp. 135--164

Abstract

Despite the potential of integrating machine learning (ML) and artificial intelligence capabilities into educational settings, several challenges hamper their widespread use and adoption. Chief among them is that these technologies often function as “opaque-box” models, and this lack of transparency can undermine trust, fairness, and accountability. To address this, explainability methods are essential for understanding how models perform tasks such as predicting at-risk students, grading essays, or identifying plagiarism. This chapter demonstrates several techniques for explaining ML models in educational contexts through a tutorial covering both regression (predicting student grades) and classification (identifying high versus low achievers). We describe how variable-importance measures, partial dependence plots, and accumulated local effects can help educators interpret the outcomes of predictive models, increasing transparency and trust.
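To make the first of the techniques named above concrete, the following is a minimal, library-free sketch of permutation variable importance on synthetic data. Everything here is an illustrative assumption, not material from the chapter: the feature names (`study_hours`, a pure-noise feature), the generating rule for `grade`, and the stand-in `predict` function are all hypothetical. The idea is simply that shuffling a feature the model relies on should cause a large drop in R², while shuffling an irrelevant feature should not.

```python
import random

random.seed(0)
n = 500
# Synthetic data: a "grade" driven by study hours; "noise" is irrelevant.
study_hours = [random.uniform(0, 10) for _ in range(n)]
noise = [random.gauss(0, 1) for _ in range(n)]
grade = [5 * h + random.gauss(0, 2) for h in study_hours]

def predict(hours, _noise):
    # Stand-in for a fitted model: here simply the known generating rule,
    # which ignores the noise feature entirely.
    return 5 * hours

def r2(y_true, y_pred):
    # Coefficient of determination used as the model's score.
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

baseline = r2(grade, [predict(h, z) for h, z in zip(study_hours, noise)])

def permutation_importance(feature_index):
    """Shuffle one feature column, re-score, and report the drop in R^2."""
    cols = [study_hours[:], noise[:]]
    random.shuffle(cols[feature_index])
    preds = [predict(h, z) for h, z in zip(cols[0], cols[1])]
    return baseline - r2(grade, preds)

imp_hours = permutation_importance(0)  # large drop: the model uses this
imp_noise = permutation_importance(1)  # zero drop: the model ignores this
print(imp_hours, imp_noise)
```

In practice one would average the drop over several shuffles and use a genuinely fitted model (e.g., a random forest), but the shuffle-and-rescore logic is the same.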

Affiliations

University of Eastern Finland, Joensuu, Finland