We Consider the Ramifications of Cognitive Software
When cars first appeared, governments worried about the dangers of the new invention. In the UK, the Locomotive Act of 1865, better known as the Red Flag Act, imposed a speed limit of 2 mph in towns and required a man carrying a red flag to walk 60 yards ahead of the vehicle; other countries enacted similar laws. As we advance into the era of autonomous vehicles, the world has turned its attention to the ethical issues of artificial intelligence and to what regulation, if any, is appropriate.
I recently attended the Microsoft-hosted Data Science and Law Forum in Brussels, which brought together interested parties to discuss legal and ethical issues in artificial intelligence. It was the first such session organised by Microsoft, joining several other forums around the world that focus on this issue. Attendees included a broad mix of suppliers, EU officials, regulators, academics, policy organisations, practitioners and consultants.
Progress in artificial intelligence was summed up by Chris Bishop, Senior Fellow and Director of the Microsoft Research Lab in Cambridge, UK. He said that true artificial intelligence, the superior general intelligence depicted in films, is still a long way off. However, we are at a singular point in software development: we are using software to create highly complex machine learning models that humans could not code alone, and these are introducing new capabilities at increasing speed. The pace of development is therefore now set by access to skilled people and to the data needed to train the models, rather than by our capacity to write code. Globally, the volume of data is roughly doubling every year, faster even than the two-year transistor doubling of Moore's Law, and it is this that now gives technology its exponential development.
As we head onto this new development ramp, we are running into many ethical and legal issues that need to be addressed as artificial intelligence is increasingly used in society. Microsoft’s stance here is to act as a facilitator: it candidly admits it does not have all the answers.
The ethical, legal and policy issues in artificial intelligence are broad and deep. They have become important given the speed of technical development in the area, and the push by many suppliers to democratise the technology. The event in Brussels raised and discussed many central questions, although few answers exist at this stage. Here I highlight the main themes that emerged.
Societal Issues
The first theme covered the issues artificial intelligence raises for society as a whole. They include bias, transparency, explainability, trust and the impact on jobs. Each of these is a big area in its own right, with a full discussion beyond the scope of this summary. Several also raise further questions. For example, humans are biased in their decision-making, often in highly unpredictable ways; is artificial intelligence systematically worse? Perhaps we will reach a stage where we prefer software-based decisions for some tasks because they are more predictable and less biased.
Developing adequate trust in the technology will depend on high levels of security and privacy, as well as on ways to explain how decisions are reached. We will also need established norms so that people know when artificial intelligence is being used, and mechanisms for querying the decisions it makes.
Because of bias and other societal issues, there was strong agreement in forum discussions that it is important to use multidisciplinary teams for artificial intelligence projects, so that problems are spotted more easily. One data scientist at the event noted that “You should never let a data scientist work on their own” because the broader team input provides essential context to minimise bias.
Another of the questions raised stems from the fact that many people are happy to give blood without any explanation of what happens to the DNA information it contains. Why do we trust that system so much, yet place lower trust in an artificial intelligence system? Similarly, would we trust a junior doctor examining an X-ray more than a software-based assessment of the same image?
Early Stages of Development
The second theme is that we are still in the early stages of the development of artificial intelligence as a product. With many straightforward product categories, shampoo for example, a new version emerges from the lab when its novel features are technically sound. At that point other departments swing into action, including product management, marketing, legal and compliance, finance, production and sales. Each needs a certain amount of time before it is ready to support the new product fully, and only then can the product be launched. In contrast, artificial intelligence today is dominated by academia, research and technical development. New products are pushed straight from the "lab" into the open-source community for immediate download and use by anyone in the world. None of the other functions exists yet.
That might not matter if artificial intelligence were a completely benign and harmless product. But some uses of the technology place it in a group of products with the potential to bring sweeping changes to humanity, both good and bad. This group includes oil, cars, aeroplanes, nuclear power stations, financial services and drugs, all of which are heavily regulated.
So, if regulation might be appropriate, should we have horizontal regulation set at national or regional levels? Or would it be better for each sector to organise its own regulation of the ways in which artificial intelligence is used in that industry? Or a mix of the two?
Similarly, how should we fit artificial intelligence into existing legislation? In Europe, the General Data Protection Regulation (GDPR) attempts this to some extent, but other areas will need updates, including laws governing competition, contracts and consumer protection, among many others.
Gap between Research and Practice
A follow-on theme from the early state of the technology was the big gap between the artificial intelligence products available through the open-source community and the needs of practitioners. Key aspects include the quality of data sets; the quality of data scientists; how to secure artificial intelligence as a new attack surface; and how to apply compliance rules to models that are neither transparent nor explainable. What new compliance rules are sensible? Should products used in public services, such as court hearings and medicine, be obliged to provide a greater degree of openness than artificial intelligence used commercially between two companies?
In this context, one initiative proposed by Microsoft is “data sheets for data sets” so that users can see the provenance, applicability and shortcomings of each. This is a bit like the sheet of paper found inside every box of tablets from a pharmacy, or the labels on food products. The idea is gaining support and may well be extended to include application programming interfaces to artificial intelligence services and pre-built neural network models.
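To make the idea concrete, here is a minimal sketch of what a machine-readable datasheet for a data set might look like, written in Python. The structure and field names are illustrative assumptions of mine, not a published standard; the underlying proposal is explored in the Microsoft Research paper "Datasheets for Datasets" by Gebru et al.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: these field names are assumptions, not a
# published standard. A real datasheet would also cover motivation,
# collection process, labelling, consent and recommended uses.
@dataclass
class Datasheet:
    name: str
    provenance: str                  # who collected the data, when and how
    intended_uses: list[str]         # tasks the data set is suited for
    known_shortcomings: list[str] = field(default_factory=list)

# A hypothetical data set described by its datasheet.
faces = Datasheet(
    name="example-faces-v1",
    provenance="Public web images, 2017-2018; labels crowdsourced.",
    intended_uses=["face detection research"],
    known_shortcomings=[
        "Under-represents older age groups",
        "Geographic skew towards North America",
    ],
)

# A practitioner can check fit before training on the data.
if "face detection research" in faces.intended_uses:
    print(f"{faces.name}: usable, with {len(faces.known_shortcomings)} caveats to note.")
```

Extending the same structure to application programming interfaces and pre-built neural network models would mean adding fields for training data, evaluation conditions and known failure modes.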
Another area receiving attention is technical accreditation for data scientists and those who write artificial intelligence software. People are not allowed to practise as surgeons or pilots without extensive training and certification, and they are barred from working if they are shown to be malicious or incompetent. What level of training and certification is appropriate for artificial intelligence practitioners?
The issues raised are complicated and cut across many areas of society and our legal frameworks. They collide with cultural norms and practices, so we can expect significant variation across countries in what is acceptable. We do not need a man with a red flag and a walking-pace speed limit, but the risk for suppliers is that horizontal regulation imposed by governments will tend to be cautious and will slow innovation.
The best way for the technology industry to avoid this outcome is to drive the regulatory and ethical agenda in a responsible way itself. That is why Microsoft is to be applauded for organising events such as this, and why all companies that claim to be at the forefront of artificial intelligence need to engage strongly with governmental and societal bodies.