Reproducibility, Reusability, and Robustness in Deep Reinforcement Learning

McGill Professor Joelle Pineau gave an insightful presentation on reproducibility in machine learning, and especially in deep reinforcement learning. This reflects a broader trend in science: some published results cannot be fully reproduced. In deep reinforcement learning, there is a stochastic component to the results, such as the present value of future rewards. She observes that results can vary for reasons that should not matter, such as the choice of random seed (used to generate random variables), and that baseline implementations by different researchers can yield different outcomes. Making the code and the data available so that other researchers can reproduce paper results could alleviate some of these problems. She has introduced the Reproducibility Challenge, which could be adopted by other scientific conferences.
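As a concrete illustration of the seeding issue (this sketch is not from Pineau's talk; it simply assumes a Python setup with NumPy and PyTorch), one can at least pin the common random number generators and report results over several seeds rather than a single one:

```python
import random

import numpy as np
import torch


def set_global_seed(seed: int) -> None:
    """Fix the seeds of the common random number generators.

    This removes one source of run-to-run variation, but it does not make a
    deep RL experiment fully deterministic: GPU kernels, environment
    dynamics, and asynchronous execution can still introduce noise.
    """
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)


# Reporting the average return over several seeds, rather than one lucky
# seed, gives a more honest picture of an algorithm's performance.
for seed in (0, 1, 2, 3, 4):
    set_global_seed(seed)
    # ... train the agent and record its return for this seed ...
```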

Is Interpretability Necessary for Machine Learning?

At the NIPS 2017 conference there was a fascinating debate on the necessity of interpretability in machine learning. Without interpretability, mistakes can be made, for instance when correlation is used as a proxy for causation, as Rich Caruana illustrates with a medical example. Yann LeCun, on the other hand, thinks that interpretability is not necessary: the model just needs to work. According to LeCun, people are not really interested in looking into the intimate details of a machine learning model; they just want their models to work. Kilian Weinberger argues that, given the choice between an interpretable model with a high error rate and a non-interpretable model with a low error rate, people would choose the latter.

Interpretability is closer to what an economist would require: a model that can be explained, with parameters that can be estimated and interpreted. If the model cannot be explained, there is always a risk of capturing spurious correlations, suffering from omitted variable bias (when an important explanatory variable is missing) or endogeneity problems (when an explanatory variable is correlated with the error term), and ultimately being wrong. At the same time, for real-life applications such as forecasting or medical diagnostics, model accuracy is probably more important.
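To make the omitted variable bias concrete, here is the standard textbook result (not spelled out in the original post): if the true model is $y = \beta_1 x_1 + \beta_2 x_2 + \varepsilon$ but $x_2$ is left out, then regressing $y$ on $x_1$ alone gives, in large samples,

$$\hat{\beta}_1 \;\to\; \beta_1 + \beta_2\,\frac{\operatorname{Cov}(x_1, x_2)}{\operatorname{Var}(x_1)},$$

so the estimated effect of $x_1$ silently absorbs the effect of the missing variable whenever the two are correlated.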

Without interpretability, there is a risk that a machine learning model will make mistakes that a human with “common sense” would not make. A more serious risk is that the actual error rate will be higher on real-world data once the model is deployed in production (the model is wrong). Economists use interpretable economic models to limit this risk. In the current state of machine learning, there seems to be a trade-off between interpretability and accuracy (or effectiveness). Some promising approaches have been suggested to make machine learning models more interpretable, for instance by approximating them locally with simpler models, such as LIME. You can read more in this post.
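As a rough illustration of this idea (a minimal sketch assuming the open-source `lime` package and a scikit-learn random forest on the Iris data, not code from the post), a complex classifier can be explained one prediction at a time with a simple local surrogate:

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train a "black-box" model on the Iris data.
data = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# Build an explainer around the training data distribution.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain one prediction with a simple, interpretable local model.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())  # (feature condition, local weight) pairs
```

The local weights indicate which features drove this particular prediction, which is exactly the kind of post-hoc explanation the debate is about.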

More rigorous testing of the models can also be used, and sometimes confronting the models with “common sense” or the current state of knowledge in the field can be useful. In domains where machines have reached superhuman skill (think of AlphaGo), the latter approach might not be possible, however.

We encourage you to follow this debate:

What Can Machine Learning Do? Workforce Implications

This is an economist's talk on the implications of machine learning for the workforce, given by Professor Erik Brynjolfsson at the ICLR 2018 conference. The effect of technology on jobs appears to fall most heavily on unskilled workers and has been reinforcing inequality. He gives some indications of jobs that are likely to survive AI, such as massage therapist and anything that requires human or social interaction.