Precision Explainable AI
My research in this area is dedicated to advancing Precision Explainable Artificial Intelligence — systems that model individuals with enough granularity to deliver person-specific insights and recommendations. The challenge is not just prediction, but understanding: stakeholders in education need to know why a model flags a student as at-risk, and those reasons must reflect that student’s unique circumstances rather than population averages. This work prioritizes equity, transparency, and inclusivity while leveraging unobtrusive multimodal data and advanced analytical methods.
Through the development of idiographic AI techniques and person-specific models, I aim to reduce bias and unfairness by building adaptive models grounded in each individual's own data rather than population averages. At the global XAI level, we identify which variables matter most across learners; at the local level, we explain why a specific prediction was made for a specific student. More recently, we have shown how large language models can automate the generation of natural-language explanations of predictive models, making XAI accessible to non-technical audiences.
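The global/local distinction above can be sketched with a toy linear risk model. Everything here — the feature names, weights, and cohort baselines — is an illustrative assumption for exposition, not a model or dataset from the publications below: global importance ranks features by the magnitude of their learned weights, while a local explanation decomposes one student's score into per-feature contributions.

```python
# Illustrative sketch of global vs. local explanation for a linear risk model.
# All feature names, weights, and baseline values are hypothetical assumptions.

FEATURES = ["logins_per_week", "forum_posts", "quiz_score"]
WEIGHTS = {"logins_per_week": -0.8, "forum_posts": -0.3, "quiz_score": -1.5}
BASELINE = {"logins_per_week": 5.0, "forum_posts": 2.0, "quiz_score": 0.7}  # cohort means
INTERCEPT = 2.0  # higher output = higher predicted risk

def predict_risk(x: dict) -> float:
    """Risk score: intercept plus weighted deviations from the cohort mean."""
    return INTERCEPT + sum(WEIGHTS[f] * (x[f] - BASELINE[f]) for f in FEATURES)

def global_importance() -> list:
    """Global view: rank features by the magnitude of their weights."""
    return sorted(FEATURES, key=lambda f: abs(WEIGHTS[f]), reverse=True)

def local_explanation(x: dict) -> dict:
    """Local view: each feature's contribution to THIS student's score."""
    return {f: WEIGHTS[f] * (x[f] - BASELINE[f]) for f in FEATURES}

# A single (hypothetical) student who logs in rarely and scores below the mean.
student = {"logins_per_week": 1.0, "forum_posts": 0.0, "quiz_score": 0.4}
print(global_importance())         # which variables matter most overall
print(local_explanation(student))  # why this particular student was flagged
```

In practice the same two views come from model-agnostic attribution methods (e.g., SHAP-style values) rather than raw linear weights, but the contract is identical: one ranking for the cohort, one additive decomposition per individual, and the latter is what an LLM can verbalize for a non-technical audience.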
While the primary focus is on educational applications, the implications extend to well-being, mental health, and other domains of societal significance. Because these solutions scale through mobile, person-specific AI applications, they carry the potential for global impact.
Selected Publications
- Explainable Artificial Intelligence in Education: A Tutorial for Identifying the Variables that Matter (2026)
- Individualized Explainable Artificial Intelligence: A Tutorial for Local and Individual Predictions (2026)
- LLMs for Explainable AI: Automating Natural Language Explanations of Predictive Models (2026)
- Automating Individualized Machine Learning Prediction Using AutoML (2026)
- AI, Explainable AI and Evaluative AI: Informed Data-Driven Decision-Making in Education (2026)
- Idiographic Artificial Intelligence to Explain Students' Self-Regulation: Toward Precision Education (2024)
- Why Explainable AI May Not Be Enough: Predictions and Mispredictions in Education (2024)