Is Interpretability Necessary for Machine Learning?

At the NIPS 2017 conference there was a fascinating debate on the necessity of interpretability in machine learning. Without interpretability, mistakes can be made, for instance when correlation is used as a proxy for causation, as Rich Caruana illustrates with a medical example. Yann LeCun, on the other hand, thinks that interpretability is not necessary: the model just needs to work. According to LeCun, people are not really interested in looking into the intimate details of a machine learning model; they just want their models to work. Kilian Weinberger argues that, between an interpretable model with a high error rate and a non-interpretable model with a low error rate, people would choose the latter.

Interpretability is closer to what an economist would require: a model that can be explained, with parameters that can be estimated and interpreted. If the model cannot be explained, there is always a risk of capturing spurious correlations, of omitted variable bias (when an important explanatory variable is missing), or of endogeneity problems (when an explanatory variable is correlated with the error term), and of simply being wrong. At the same time, for real-life applications such as forecasting or medical diagnostics, model accuracy is probably more important.
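As a toy illustration of omitted variable bias (a hypothetical simulation of ours, not an example from the debate), the following Python snippet shows how leaving out a correlated explanatory variable biases the estimated coefficient on the variable that remains:

```python
import numpy as np

# Hypothetical simulation of omitted variable bias.
# True model: y = 1.0*x1 + 2.0*x2 + noise, where x2 is correlated with x1.
rng = np.random.default_rng(0)
n = 10_000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.5, size=n)   # x2 correlated with x1
y = 1.0 * x1 + 2.0 * x2 + rng.normal(size=n)

# Correctly specified model: regress y on both x1 and x2 (ordinary least squares).
X_full = np.column_stack([x1, x2])
beta_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)

# Mis-specified model: omit x2 and regress y on x1 alone.
X_short = x1.reshape(-1, 1)
beta_short, *_ = np.linalg.lstsq(X_short, y, rcond=None)

print("coefficient on x1 with x2 included:", beta_full[0])   # close to the true value 1.0
print("coefficient on x1 with x2 omitted: ", beta_short[0])  # biased upward, roughly 1.0 + 2.0*0.8
```

The mis-specified regression still fits the data reasonably well, which is exactly the danger: without interpreting the model and its specification, the bias goes unnoticed.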

Without interpretability, there is a risk that a machine learning model will make mistakes that a human with “common sense” would not make. A more serious risk is that the actual error rate will be higher on real-world data once the model is deployed in production (the model is wrong). Economists use interpretable economic models to limit this risk. In the current state of machine learning, there seems to be a trade-off between interpretability and accuracy (or effectiveness). Some promising approaches have been suggested to make machine learning models more interpretable, for instance by approximating them with simpler local models, such as LIME. You can read more in this post.
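To give a flavour of the local-surrogate idea behind LIME, here is a minimal Python sketch (our own simplified illustration, not the official lime package): a random forest serves as the black box, and a linear model fitted on perturbed samples around a single instance, weighted by proximity, approximates its behaviour locally. The dataset, perturbation scale, and kernel width are arbitrary choices made for the example.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Black-box model we want to explain.
X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Pick one prediction to explain and perturb the instance around its neighbourhood.
instance = X[0]
rng = np.random.default_rng(0)
perturbations = instance + rng.normal(scale=X.std(axis=0) * 0.1, size=(500, X.shape[1]))
preds = black_box.predict_proba(perturbations)[:, 1]

# Weight perturbed samples by proximity to the instance (Gaussian kernel on scaled distance).
distances = np.linalg.norm((perturbations - instance) / X.std(axis=0), axis=1)
weights = np.exp(-(distances ** 2) / 2.0)

# Local linear surrogate: its coefficients approximate the black box around `instance`.
surrogate = Ridge(alpha=1.0).fit(perturbations, preds, sample_weight=weights)
top = np.argsort(np.abs(surrogate.coef_))[::-1][:5]
for i in top:
    print(f"feature {i}: local weight {surrogate.coef_[i]:+.4f}")
```

The surrogate says nothing about the black box globally; it only explains which features drive the prediction in the neighbourhood of this one instance, which is the trade-off LIME accepts in exchange for interpretability.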

More rigorous testing of the models can also be used, and sometimes confronting the models with “common sense” or the current state of knowledge in the field can be useful. In domains where machines have reached superhuman skill (think of AlphaGo), however, the latter approach might not be possible.

We encourage you to follow this debate.

