Individualized Explainable Artificial Intelligence: A Tutorial For Identifying Local and Individual Predictions

Mohammed Saqr and Sonsoles López-Pernas
Advanced Learning Analytics Methods, 2026, pp. 165--187

Abstract

In the context of explainable artificial intelligence, global explanations provide aggregate insights into a machine learning model's performance and the factors that influence it, as we saw in the previous chapter. However, local explanations are needed to understand the factors behind a specific decision that affects an individual. For instance, local explanations could help teachers understand why a certain student was flagged as at risk of dropping out of a course, fostering transparency and trust. This chapter highlights the need for local explanations in educational contexts and explores three key techniques: Break Down plots, SHAP (SHapley Additive exPlanations), and LIME (Local Interpretable Model-agnostic Explanations). Practical examples demonstrate how these methods address prediction interpretability, identify critical features, and support targeted interventions.

© 2026 The Editor(s) (if applicable) and The Author(s).

Affiliations

University of Eastern Finland, Joensuu, Finland