35th International Conference on Machine Learning (ICML), Stockholm

IBM’s Richard Tomsett will present the paper “Interpretable to whom? A role-based model for analysing interpretable machine learning systems” at the Workshop on Human Interpretability in Machine Learning 2018.

The paper is co-authored by the CSRI’s Prof Alun Preece, Research Assistant Dan Harborne, and IBM’s Dave Braines, who is also a CSRI PhD student.

Despite high-profile breakthroughs in machine learning in recent years, there are widespread concerns that machine learning systems are “black boxes”, with limited ability to explain their outputs and therefore difficult for users to trust. The machine learning community refers to this problem as “interpretability”, and the Workshop on Human Interpretability in Machine Learning (WHI) has emerged in recent years as a key forum for researchers and practitioners in this area.

In the paper at the 2018 WHI workshop, the authors argue that a machine learning system’s interpretability should be defined in relation to a specific agent or task: we should not ask whether the system is interpretable, but to whom it is interpretable. They present a model intended to help answer this question by identifying the different roles that agents (humans or machines) can fulfil in relation to a machine learning system. The use of the model is illustrated in a variety of scenarios, exploring how an agent’s role influences its goals and the implications for defining interpretability. Finally, the paper suggests how the model could be useful to interpretability researchers, system developers, and regulatory bodies auditing machine learning systems.

https://sites.google.com/view/whi2018/home
https://arxiv.org/abs/1806.07552

This work is part of the DAIS ITA project at CSRI.

Linguistic and Cognitive Approaches to Dialog Agents Workshop, Sweden
