What Can Machine Learning Do? Workforce Implications

This is an economist's talk on the implications of machine learning for the workforce, given by Professor Erik Brynjolfsson at the ICLR 2018 conference. The effects of technology on jobs appear to fall hardest on unskilled workers and have been reinforcing inequality. He gives some indications of which jobs are likely to survive AI, such as massage therapist and, more generally, anything that requires human or social interaction.

Machine Learning: An Applied Econometric Approach

Susan Athey’s article discussed machine learning and causal inference. The article Machine Learning: An Applied Econometric Approach, by Harvard Professor Sendhil Mullainathan and Jann Spiess, focuses instead on machine learning as an econometric tool.

Abstract:

Machines are increasingly doing “intelligent” things. Face recognition algorithms use a large dataset of photos labeled as having a face or not to estimate a function that predicts the presence y of a face from pixels x. This similarity to econometrics raises questions: How do these new empirical tools fit with what we know? As empirical economists, how can we use them? We present a way of thinking about machine learning that gives it its own place in the econometric toolbox. Machine learning not only provides new tools, it solves a different problem. Specifically, machine learning revolves around the problem of prediction, while many economic applications revolve around parameter estimation. So applying machine learning to economics requires finding relevant tasks. Machine learning algorithms are now technically easy to use: you can download convenient packages in R or Python. This also raises the risk that the algorithms are applied naively or their output is misinterpreted. We hope to make them conceptually easier to use by providing a crisper understanding of how these algorithms work, where they excel, and where they can stumble—and thus where they can be most usefully applied.
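To make the abstract's distinction between prediction and parameter estimation concrete, here is a minimal sketch (my own, not from the paper) that asks both questions of the same simulated data. It assumes the widely used scikit-learn and statsmodels Python packages; the data-generating process and all settings are purely illustrative.

```python
# "y-hat" task: predict y as well as possible out of sample (machine learning).
# "beta-hat" task: estimate and do inference on the parameters of a model of y (econometrics).
import numpy as np
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=(n, 3))                    # observed covariates
beta = np.array([2.0, -1.0, 0.5])              # "true" parameters of the simulation
y = x @ beta + rng.normal(size=n)              # outcome with noise

# Prediction: hold out data and judge the model only by out-of-sample error.
x_tr, x_te, y_tr, y_te = train_test_split(x, y, test_size=0.3, random_state=0)
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(x_tr, y_tr)
print("out-of-sample MSE:", mean_squared_error(y_te, forest.predict(x_te)))

# Parameter estimation: fit OLS and read off coefficients and standard errors.
ols = sm.OLS(y, sm.add_constant(x)).fit()
print("beta estimates:", ols.params[1:])       # here the object of interest is beta itself
print("standard errors:", ols.bse[1:])
```

The contrast in the two printouts mirrors the paper's point: the random forest is judged purely on how well it predicts new observations, while the OLS fit is judged on whether its coefficient estimates and standard errors support inference about the parameters.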

Sendhil also recently gave an interesting talk at the Stanford Center on Global Poverty and Development on applying machine learning to poverty alleviation:

Machina Economicus

This interesting 2015 paper by Professors Parkes (Harvard) and Wellman (U. of Michigan) discusses a possible synthesis of economic reasoning and artificial intelligence. The abstract:

The field of artificial intelligence (AI) strives to build rational agents, capable of perceiving the world around them and taking actions to advance specified goals. Put another way, AI researchers aim to construct a synthetic homo economicus, the mythical perfectly rational agent of neoclassical economics.
We review progress towards creating this new species of machine, machina economicus, and discuss some challenges in designing AIs that can reason effectively in economic contexts. Supposing that AI succeeds in this quest, or at least comes close enough that it is useful to think about AIs in rationalistic terms, we ask how to design the rules of interaction in multi-agent systems that come to represent an economy of AIs. Theories of normative design from economics may prove more relevant for artificial agents than human agents, with AIs that better respect idealized assumptions of rationality than people, interacting through novel rules and incentive systems quite distinct from those tailored for people.

Indeed, economics often takes rational economic reasoning (e.g., maximizing a utility function under an income constraint) as a good first approximation of human behavior, and AI agents may come closest to these idealized rational agents. In AI, however, agents take inputs and produce outputs without optimizing a utility function at run time. They do minimize a loss function when a machine learning model is fit, but once in production they mechanically apply the trained model to produce the outcomes that the agent's designer (a human) has specified. The utility function the agent inherits from its designer can be just as irrational as that of a human being. For instance, AI agents could easily reproduce the tragedy of the commons if each designer optimizes only the individual agent's strategy and ignores the negative externalities.
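As a toy illustration of that last point (my own sketch, not drawn from the paper): picture several agents harvesting from a shared, regenerating resource, each trained to maximize only its own harvest. The numbers below (regeneration rate, quotas, horizon) are made up for the example.

```python
# Toy common-pool resource shared by several agents.
# Each agent's "utility" is its own harvest; the ignored externality is the depleted stock.

def simulate(per_agent_rate, n_agents=5, stock=100.0, regen=0.15, steps=50):
    """Each step every agent tries to take per_agent_rate * stock, then the remainder regrows."""
    total_harvest = 0.0
    for _ in range(steps):
        take = min(stock, n_agents * per_agent_rate * stock)  # combined harvest this step
        total_harvest += take
        stock = (stock - take) * (1.0 + regen)                # what is left regenerates
    return total_harvest, stock

# Individually "optimal" greedy policy: each agent grabs 30% of the stock every step.
greedy_total, greedy_stock = simulate(per_agent_rate=0.30)

# Coordinated quota that internalizes the externality: 2% each, below the regeneration rate.
quota_total, quota_stock = simulate(per_agent_rate=0.02)

print(f"greedy: total harvest {greedy_total:8.1f}, final stock {greedy_stock:8.1f}")
print(f"quota:  total harvest {quota_total:8.1f}, final stock {quota_stock:8.1f}")
```

The greedy agents exhaust the pool almost immediately and collectively harvest far less over the horizon than the coordinated quota does, which is exactly the kind of outcome the "rules of interaction" discussed in the abstract above would need to prevent.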