Precision Explainable AI

My research in this area advances Precision Explainable Artificial Intelligence: systems that model individuals with enough granularity to deliver person-specific insights and recommendations. The challenge is not just prediction but understanding: stakeholders in education need to know why a model flags a student as at-risk, and those reasons must reflect that student’s unique circumstances rather than population averages. This work prioritizes equity, transparency, and inclusivity while drawing on unobtrusive multimodal data and advanced analytical methods.

Through the development of idiographic AI techniques and person-specific models, I aim to mitigate bias and unfairness by building adaptive solutions grounded in each individual’s own data. At the global XAI level, we identify which variables matter most across learners; at the local level, we explain why a specific prediction was made for a specific student, as sketched in the example below. More recently, we have shown how large language models can automate the generation of natural-language explanations of predictive models, making XAI accessible to non-technical audiences.
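To make the global/local distinction concrete, here is a minimal, self-contained sketch, not the actual pipeline used in this research: it trains a logistic regression on synthetic learner data (the feature names, the at-risk label, and the model choice are all illustrative assumptions) and reads a global importance ranking off the standardized coefficients, then breaks one student's prediction into per-feature contributions for a local explanation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical learner features; any real deployment would use its own signals.
features = ["logins_per_week", "avg_quiz_score", "forum_posts", "video_hours"]

# Synthetic data: 200 learners; the "at-risk" label depends mostly on the
# first two features, plus noise.
X = rng.normal(size=(200, 4))
y = (-1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_std = StandardScaler().fit_transform(X)
model = LogisticRegression().fit(X_std, y)

# Global explanation: which variables matter most across all learners?
# With standardized inputs, |coefficient| is a simple importance proxy.
for name, coef in sorted(zip(features, model.coef_[0]), key=lambda t: -abs(t[1])):
    print(f"global {name:>16}: weight {coef:+.2f}")

# Local explanation: why was THIS learner flagged? In a linear model, each
# feature contributes coefficient * value to the log-odds of the prediction.
student = X_std[0]
contributions = model.coef_[0] * student
print(f"\npredicted at-risk probability: {model.predict_proba([student])[0, 1]:.2f}")
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"local  {name:>16}: contribution {c:+.2f}")
```

The per-student contributions printed in the second loop are the kind of structured output that a large language model could then translate into a plain-language explanation for a teacher or advisor.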

While the primary focus is on educational applications, the implications extend to well-being, mental health, and other domains of societal significance. Because these solutions can be deployed as mobile, person-specific AI applications, they have the potential to scale globally.

