Building Machines that Learn and Think Like People

MIT Prof. Josh Tenenbaum gave a talk on Building Machines that Learn and Think Like People at ICML 2018. His insight is that it is possible to teach a machine to learn like a child by combining:

  • Game engine intuitive physics
  • Intuitive psychology
  • Probabilistic programs
  • Program induction
  • Program synthesis

This agenda is more ambitious than the current state of machine learning, resembles older styles of machine learning research, and comes with no guarantee of success. Still, it is refreshing to look to young humans for ideas on how to teach machines.
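As a toy illustration of what a probabilistic program looks like (my own sketch, not from the talk), a generative model can be written as ordinary code and then inverted by rejection sampling; here it infers a hypothetical object's friction from a noisy observation of how far the object slides.

```python
import random

def generative_model():
    """Prior over an object's friction coefficient plus a noisy 'physics' simulator."""
    friction = random.uniform(0.1, 1.0)                       # prior belief
    slide_distance = 2.0 / friction + random.gauss(0.0, 0.2)  # simulated outcome + noise
    return friction, slide_distance

observed, tolerance = 5.0, 0.3  # hypothetical measurement

# Condition on the observation: keep prior samples whose simulated outcome
# matches what was seen (crude approximate Bayesian inference).
posterior = [f for f, d in (generative_model() for _ in range(100_000))
             if abs(d - observed) < tolerance]

print(f"posterior mean friction: {sum(posterior) / len(posterior):.3f}")
```

In the agenda above, game-engine intuitive physics plays the role of the simulator inside such a program, at far larger scale than this toy.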

Imitation Learning

At the ICML 2018 conference there was a very interesting tutorial on Imitation Learning by Yisong Yue and Hoang Le from Caltech. Imitation Learning is quite similar to Reinforcement Learning, but with an expert that the machine tries to imitate by inferring a policy that maps states to actions. It can be applied to sequential decision-making problems carried out by humans or by other algorithms.

There are different categories of Imitation Learning:

  • Behavioral Cloning, which is supervised learning on the expert's state-action pairs (see the sketch after this list)
  • Direct Policy Learning (Interactive Imitation Learning), which learns through interaction with an expert
  • Inverse Reinforcement Learning, which is reinforcement learning applied to a reward function inferred from demonstrations
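A minimal sketch of behavioral cloning using scikit-learn (my own illustration, not from the tutorial; the demonstration data below is hypothetical): the expert's state-action pairs are treated as an ordinary supervised dataset, and the fitted classifier becomes the policy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical expert demonstrations: states are feature vectors,
# actions are discrete labels chosen by the expert.
rng = np.random.default_rng(0)
expert_states = rng.normal(size=(1000, 4))
expert_actions = (expert_states[:, 0] > 0).astype(int)  # the expert's (unknown) rule

# Behavioral cloning = supervised learning on state-action pairs.
policy = LogisticRegression().fit(expert_states, expert_actions)

def act(state):
    """The cloned policy: map a state to the action the expert would likely take."""
    return policy.predict(state.reshape(1, -1))[0]

print(act(np.array([0.5, 0.0, 0.0, 0.0])))  # mimics the expert's choice
```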

Direct Policy Learning can use sequential learning reduction algorithms such as Dataset Aggregation (DAgger) and Policy Aggregation (SEARN & SMILe).
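A schematic of the DAgger loop may help; this is a sketch under assumptions, where `rollout`, `expert_action`, and `train` are hypothetical stand-ins for an environment rollout, an expert query, and a supervised learner.

```python
def dagger(initial_policy, expert_action, rollout, train, n_iters=10):
    """Sketch of DAgger: roll out the current policy, ask the expert what it
    would have done in the states the learner actually visited, aggregate
    those labeled states into one dataset, and retrain on the aggregate."""
    dataset = []
    policy = initial_policy
    for _ in range(n_iters):
        states = rollout(policy)                            # states visited by current policy
        dataset += [(s, expert_action(s)) for s in states]  # expert labels those states
        policy = train(dataset)                             # supervised learning step
    return policy
```

The key difference from behavioral cloning is that the expert labels states the learner itself visits, which corrects the mismatch between the states seen in training and those seen at execution time.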

According to the presenters, Imitation Learning is often easier to implement than Reinforcement Learning. A limitation is that the machine cannot do better than the expert. The talk is here:


Reproducibility, Reusability, and Robustness in Deep Reinforcement Learning

McGill Professor Joelle Pineau gave an insightful presentation on reproducibility in machine learning, and especially in deep reinforcement learning. It reflects a general trend in science: some published results cannot be fully reproduced. In deep reinforcement learning there is an added stochastic component to the results, such as the present value of future rewards. She observes that results can vary for reasons that should not matter, such as the choice of random seed (used to generate random variables), and that different researchers' implementations of the same baselines can yield different outcomes. Making code and data available for other researchers to reproduce paper results could alleviate some of these problems. She has introduced the Reproducibility Challenge, which could be adopted by other scientific conferences.
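One practical takeaway can be sketched in a few lines (my illustration, with a hypothetical `train_and_evaluate` function): report performance across several random seeds, with a mean and a spread, rather than quoting a single lucky run.

```python
import numpy as np

def report_across_seeds(train_and_evaluate, seeds=range(10)):
    """Run the same experiment under several random seeds and summarize.
    `train_and_evaluate` is a hypothetical function mapping a seed to a
    final performance number (e.g., average return of the trained agent)."""
    returns = np.array([train_and_evaluate(seed) for seed in seeds])
    mean, std = returns.mean(), returns.std(ddof=1)
    print(f"return: {mean:.2f} +/- {std:.2f} over {len(returns)} seeds")
    return returns
```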

What Can Machine Learning Do? Workforce Implications

This is an economist's talk on the implications of machine learning for the workforce, given by Professor Erik Brynjolfsson at the ICLR 2018 conference. The effect of technology on jobs seems to be strongest for unskilled workers and has been reinforcing inequality. He gives some indications of jobs likely to survive AI, such as massage therapist and anything else that requires human and social interaction.

Natural Language Processing with Deep Learning

Communicating and understanding are usually taken as signs of intelligence and are part of the Turing test: the machine needs to communicate, and appear to understand the interrogator's questions, in order to pass as human. Natural Language Processing (NLP) has made great progress in the past 20 years. Stanford has an excellent course (CS224n) on Natural Language Processing with Deep Learning taught by Chris Manning and Richard Socher (now at Salesforce). It is available here:

The course material is also online here.

An interesting criticism of machine translation engines such as Google Translate (which use techniques taught in these NLP lectures) appears in the article The Shallowness of Google Translate in The Atlantic.

Machine Learning: an Applied Econometric Approach

Susan Athey’s article discussed machine learning and causal inference; the article Machine learning: an applied econometric approach, by Harvard Professor Sendhil Mullainathan and Jann Spiess, focuses instead on machine learning as an econometric tool.

Abstract:

Machines are increasingly doing “intelligent” things. Face recognition algorithms use a large dataset of photos labeled as having a face or not to estimate a function that predicts the presence y of a face from pixels x. This similarity to econometrics raises questions: How do these new empirical tools fit with what we know? As empirical economists, how can we use them? We present a way of thinking about machine learning that gives it its own place in the econometric toolbox. Machine learning not only provides new tools, it solves a different problem. Specifically, machine learning revolves around the problem of prediction, while many economic applications revolve around parameter estimation. So applying machine learning to economics requires finding relevant tasks. Machine learning algorithms are now technically easy to use: you can download convenient packages in R or Python. This also raises the risk that the algorithms are applied naively or their output is misinterpreted. We hope to make them conceptually easier to use by providing a crisper understanding of how these algorithms work, where they excel, and where they can stumble—and thus where they can be most usefully applied.
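To make the prediction-versus-estimation distinction concrete, here is a minimal sketch (mine, not from the paper): ordinary least squares targets the coefficients themselves, while a regularized learner such as ridge regression deliberately biases the coefficients toward zero in exchange for better out-of-sample prediction.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

# Synthetic data: few observations, many regressors, one true effect.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))
beta = np.zeros(20)
beta[0] = 2.0                        # the parameter an economist cares about
y = X @ beta + rng.normal(size=50)

ols = LinearRegression().fit(X, y)   # estimation: read off the coefficient
ridge = Ridge(alpha=10.0).fit(X, y)  # prediction: shrink coefficients, predict better

print("OLS   estimate of beta_0:", round(ols.coef_[0], 2))    # roughly unbiased, but noisy
print("Ridge estimate of beta_0:", round(ridge.coef_[0], 2))  # shrunk toward zero (biased)
```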

Sendhil also recently gave an interesting talk at the Stanford Center on Global Poverty and Development on applying machine learning to poverty alleviation:

Gradient Boosting Machine Learning

Machine learning offers a long list of methods for learning from data. Among them is gradient boosting, as taught here by Professor Trevor Hastie of Stanford University. In this video, he introduces and compares decision trees, bagging, random forests, and boosting.
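All four methods are available in scikit-learn, so a minimal side-by-side sketch (my illustration, not from the lecture) can make the comparison concrete on synthetic data.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import (BaggingRegressor, GradientBoostingRegressor,
                              RandomForestRegressor)
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

models = {
    "single tree": DecisionTreeRegressor(random_state=0),
    "bagging": BaggingRegressor(random_state=0),             # average trees fit on bootstrap samples
    "random forest": RandomForestRegressor(random_state=0),  # bagging plus random feature subsets
    "boosting": GradientBoostingRegressor(random_state=0),   # trees fit sequentially to residuals
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5).mean()  # cross-validated R^2
    print(f"{name:13s} R^2 = {r2:.3f}")
```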

He has coauthored an excellent book, The Elements of Statistical Learning, which you can download here.