Artificial Intelligence is the new PC

Artificial Intelligence is making breakthrough advances in image recognition, natural language processing, robotics, and machine learning in general. DeepMind, a leader in AI research, created AlphaZero in 2018, an AI program that reached superhuman performance in many games including Go, and more recently, in 2020, AlphaFold, which solved a protein-folding problem that had preoccupied researchers for 50 years. For Stanford Professor Andrew Ng, AI is the new electricity.

A more useful comparison is the advent of the personal computer, in particular the IBM PC in 1981 and its first killer application, the spreadsheet software Lotus 1-2-3. IBM did not introduce the first personal computer. Home computers had been available to hobbyists since 1977 from companies such as Commodore, Tandy, and Apple. The Apple II with VisiCalc was already very popular, but the IBM PC was the first affordable personal computer enthusiastically adopted by the business community.


Figure 1. IBM PC

Figure 2. Lotus 1-2-3

The novel spreadsheet software allowed flexible free-form calculations, the automation of calculations, the use of custom functions, graphics, references, and data management. Excel, the dominant spreadsheet software, is still in use more than thirty years after its introduction (with many more features). Before spreadsheets, people used calculators and reported results on paper. More intensive calculations were done on mainframe computers in languages such as FORTRAN, and the results were printed on paper.

Today, AI is the new PC. Not adopting AI is like forgoing the PC in 1981. The impact is already profound among digital-native companies and should be just as significant for everyone else.

Today, business leaders need to think about an AI strategy just as they think about their information technology strategy. As with the PC and the spreadsheet, they should expect all their employees to become users of AI at work at some point. Like the home computer, AI is already present at home with personal assistants such as Amazon Alexa, on phones with Apple Siri, and on the internet with Google. All these AI applications are now possible thanks to increasing computing power, the development of the cloud, the availability of big data, and the new deep learning paradigm.

The AI Strategy Handbook was written to help you adopt AI in your business strategy so that it creates a long-term sustainable competitive advantage for your customers, your company, your employees, and your investors.

Building Machines that Learn and Think Like People

MIT Professor Josh Tenenbaum gave a talk on Building Machines that Learn and Think Like People at ICML 2018. His insight is that it is possible to teach a machine to learn like a child by using:

  • Game engine intuitive physics
  • Intuitive psychology
  • Probabilistic programs
  • Program induction
  • Program synthesis

This agenda is more ambitious than the current state of machine learning, though it resembles older styles of machine learning, and there is no guarantee that it will succeed. Still, it is refreshing that we can learn from young humans how to teach machines.

What Can Machine Learning Do? Workforce Implications

This is an economist's talk on the implications of machine learning for the workforce, given by Professor Erik Brynjolfsson at the ICLR 2018 conference. The effect of technology on jobs seems to be strongest for unskilled workers and has been reinforcing inequality. He gives some indications of jobs that will survive AI, such as massage therapist and anything that requires human or social interaction.

Machina Economicus

This interesting 2015 paper by Professors Parkes (Harvard) and Wellman (University of Michigan) discusses a possible synthesis of economic reasoning and artificial intelligence. From the abstract:

The field of artificial intelligence (AI) strives to build rational agents, capable of perceiving the world around them and taking actions to advance specified goals. Put another way, AI researchers aim to construct a synthetic homo economicus, the mythical perfectly rational agent of neoclassical economics.
We review progress towards creating this new species of machine, machina economicus, and discuss some challenges in designing AIs that can reason effectively in economic contexts. Supposing that AI succeeds in this quest, or at least comes close enough that it is useful to think about AIs in rationalistic terms, we ask how to design the rules of interaction in multi-agent systems that come to represent an economy of AIs. Theories of normative design from economics may prove more relevant for artificial agents than human agents, with AIs that better respect idealized assumptions of rationality than people, interacting through novel rules and incentive systems quite distinct from those tailored for people.

Indeed, economics often assumes rational economic reasoning (e.g., maximizing some utility function under income constraints) as a good first approximation of human behavior, and AI agents could be the closest thing to these idealized rational agents. In AI, however, agents take inputs and produce outputs without optimizing a utility function. They do minimize a loss function when fitting a machine learning model, but once in production they mechanically use the trained models to produce the outcomes that the agent designer (a human) has specified. The utility function the agent inherits from its designer could be as irrational as that of a human being. For instance, AI agents could easily reproduce the tragedy of the commons if the designer optimizes only the individual agent's strategy and ignores the negative externalities.
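The tragedy-of-the-commons point can be made concrete with a toy simulation. This is a hypothetical sketch (the agent count, growth rate, and consumption fractions are all invented for illustration, not taken from the paper): agents that each take an "individually rational" large share of a shared, regrowing resource collapse it, while restrained agents sustain it.

```python
# Hypothetical tragedy-of-the-commons sketch (illustrative numbers only).
# N identical agents each consume a fraction of a shared stock every round;
# the remaining stock regrows by a fixed factor.

N = 10          # number of agents sharing the resource
GROWTH = 1.2    # leftover stock regrows 20% per round

def step(stock, per_agent_take):
    """One round: all agents consume, then the remainder regrows."""
    consumed = min(stock, N * per_agent_take * stock)
    return (stock - consumed) * GROWTH

def run(per_agent_take, rounds=10, stock=100.0):
    for _ in range(rounds):
        stock = step(stock, per_agent_take)
    return stock

# Each agent taking 9% of the stock is individually tempting but jointly
# ruinous; taking ~1.6% keeps total consumption below the regrowth rate.
selfish = run(per_agent_take=0.09)
cooperative = run(per_agent_take=0.016)
print(selfish)      # collapses toward zero
print(cooperative)  # stock is sustained
```

The design choice mirrors the text: each agent's policy is fixed by its designer, so if that policy ignores the externality of joint consumption, the collective outcome is irrational even though every agent executes its own objective perfectly.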

AI Code

The UK House of Lords recently published a report on "the economic, ethical and social implications of advances in artificial intelligence." It suggests an AI Code to reassure the public that AI will not be used against its interests. The principles are:

(1) Artificial intelligence should be developed for the common good and benefit of humanity.

(2) Artificial intelligence should operate on principles of intelligibility and fairness.

(3) Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.

(4) All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.

(5) The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

This reminds us of course of Asimov’s Three Laws of Robotics:

(1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

(2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

(3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

We believe this is only the beginning of our reflections on how to regulate AI. There is already some work on the legal liability of AI; you can read this interesting paper on Artificial Intelligence and Legal Liability.

Computing Machinery and Intelligence

One of the most seminal papers on artificial intelligence was written by Alan Turing in 1950. The paper describes the famous Turing test to determine whether machines can think. We encourage you to read it.

Alan Turing calls it the Imitation Game. It involves three parties: A, B, and C. A is the machine, B is a human, and C interacts with both A and B by text to figure out whether A is a human or a machine, while B helps C. If C's probability of success does not change whether A is a human or a machine, then Turing suggests that "machines can think".
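Turing's criterion can be sketched as a small simulation. Everything here is a hypothetical illustration (the three-word reply vocabulary, the "overuses maybe" tell, and the judge's heuristic are all invented): a detectable imitator lets the interrogator beat chance, while a perfect imitator drives the interrogator's success rate back to a coin flip.

```python
import random

random.seed(0)

VOCAB = ["yes", "no", "maybe"]

def human():
    # The human replies uniformly over the (invented) vocabulary.
    return random.choice(VOCAB)

def weak_machine():
    # A detectable imitator: it overuses "maybe".
    return "maybe" if random.random() < 0.8 else random.choice(VOCAB)

def perfect_machine():
    # A perfect imitator samples replies from the same distribution as a human.
    return random.choice(VOCAB)

def judge_accuracy(machine, trials=20000):
    """C sees one reply from the machine and one from the human, and guesses
    that the party who said 'maybe' is the machine; with no signal, C guesses."""
    correct = 0
    for _ in range(trials):
        m, h = machine(), human()
        if m == "maybe" and h != "maybe":
            correct += 1                      # heuristic points at the machine
        elif h == "maybe" and m != "maybe":
            correct += 0                      # heuristic points at the human
        else:
            correct += random.random() < 0.5  # no signal: coin flip
    return correct / trials

print(judge_accuracy(weak_machine))     # well above 0.5: the imitation fails
print(judge_accuracy(perfect_machine))  # close to 0.5: the machine "passes"
```

The point of the sketch is Turing's: once the machine's behavior is statistically indistinguishable from the human's, no interrogation strategy can do better than chance, which is exactly the "probability of success does not change" condition.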

This indirect approach has the advantage of being more objective than directly addressing the question of whether a machine can think. The disadvantage is that it does not differentiate between pretending to think and actually thinking: a very good imitation could win the imitation game!

It was also mentioned in the movie The Imitation Game:

https://www.youtube.com/watch?v=IwVzwsam1NM

We feel that the question has now been answered, at least in some specific domains such as games (see AlphaGo). It has been demonstrated that computers can be better than humans. It would be strange to keep arguing that computers do not think when hard-thinking humans cannot beat them at such intellectual tasks.

Some confusion arises when thinking and consciousness are deemed equivalent. Turing cites the objection of Professor Jefferson:

[The Argument from Consciousness] This argument is very well expressed in Professor Jefferson's Lister Oration for 1949, from which I quote. "Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain - that is, not only write it but know that it had written it.
No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.”

If we consider birds, it is clear that they are thinking creatures. We do not know whether they are conscious, though some researchers believe that consciousness is not restricted to humans (see The Cambridge Declaration on Consciousness).

Now if we compare AlphaGo to a bird, it is easier to conclude that AlphaGo thinks as much as a bird does and is even "smarter" than a bird in many domains. We do not need to investigate whether AlphaGo is conscious.

On this last point, we note that some recent research on Theory of Mind seems to give a machine the ability to represent the mental states of others, including their desires, beliefs, and intentions. It might be possible for the machine to apply the same model to itself, which would bring it closer to being conscious. This will be the subject of another post.