The UK House of Lords recently published a report on “the economic, ethical and social implications of advances in artificial intelligence.” It proposed an AI Code intended to reassure the public that AI will not be used against their interests. The principles are:
(1) Artificial intelligence should be developed for the common good and benefit of humanity.
(2) Artificial intelligence should operate on principles of intelligibility and fairness.
(3) Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
(4) All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
(5) The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
This, of course, recalls Asimov’s Three Laws of Robotics:
(1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
(2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
(3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
We believe this is only the beginning of our reflections on how to regulate AI. There is already some work on the legal liability of AI; you can read this interesting paper on Artificial Intelligence and Legal Liability.