Data Science and Law Forum shows Microsoft’s strong AI stance
I recently attended the Data Science and Law Forum in Brussels, the second run of a Microsoft event first held 18 months ago (see Artificial Intelligence: The Only Way Is Ethics). Microsoft again went for an unusual format, deciding not to make it a marketing event for the company and instead taking the role of facilitator. Making it clear that it doesn’t have all the answers, Microsoft said it wants to help the world find its way with artificial intelligence (AI) ethics and regulation.
In setting up the event this way, Microsoft demonstrated impressive levels of trust from the European Commission, as well as great reach in the institutions and stakeholders of the EU.
European Commission’s white paper a big theme
The biggest talking point of the event was the commission's recently released White Paper on Artificial Intelligence: A European Approach to Excellence and Trust. This is a consultation document that paves the way for European regulation of AI and aims to provide a framework within which the commission can help Europe catch up with the technology. Europe is a long way behind the US and China in AI research, in building home-grown AI champions to rival Google and Microsoft, and in the AI start-up ecosystem.
Adoption of AI by industry in Europe also lags the US. Our recent IT Decision-Maker Workplace Technology Survey 2019 showed that 35% of companies in the US are using AI in a production environment, compared with 18% in Europe.
The white paper looks set to generate plenty of discussion, and it raises many deep questions, including:
- How should we define AI for regulatory purposes?
- Can regulation really help Europe to catch up?
- Will regulation in practice favour bigger players, which have more resources to invest in compliance, as was the case with the General Data Protection Regulation (GDPR)?
- Given the very broad scope of AI usage and the speed of AI development, how realistic is it to try to develop all-encompassing regulation?
- Could it work better to have a few broad principles in place, and then develop regulation incrementally in a series of policy “sprints”?
A key aspect of the white paper is a risk-based approach that focusses on the highest-risk areas first, recognizing that it won’t be feasible to do everything at once, and that public authorities need to grow into their roles with AI over time.
Facial recognition is the highest-risk area today
At present, facial recognition technology carries the highest level of risk. The technology can be used in three main ways: for verification (such as in automatic gates at airport immigration), for identification (in body-worn cameras for police, for example) and for surveillance. The last of these has become a lightning rod for legal challenges and test cases: Microsoft has called for regulation, and a legal challenge against South Wales Police, backed by human rights organization Liberty, is a prominent example. Other stories highlighting the challenges facing the technology include Clearview AI scraping billions of facial images from social networking sites to power its real-time surveillance database, and a GDPR fine levied against a high school in Skellefteå, Sweden, after it trialled facial scans to track class attendance without the appropriate legal basis.
Facial recognition has such a high profile at present because:
- It works with highly sensitive biometric data.
- It’s often linked to other databases, and can then be used as part of profiling.
- It affects several fundamental rights, going way beyond just people’s privacy.
- The technical systems have so far not been accurate or reliable enough.
The problems with the technology fall into several areas. Some of the cameras used, such as police body-worn cameras, are not of high enough quality to give reliable results. The technology deployed by some camera manufacturers is optimized for light-skinned faces. The quality of the results is heavily dependent on lighting conditions. There have been significant problems with bias in the training data sets, with specific challenges where different variables intersect, such as for women of colour. As a result, facial scanning has produced high rates of false positives and false negatives. Lastly, those using the technology aren't always clear about what happens to the collected data afterwards. Clearly, in areas like policing, all these challenges need to be addressed.
The strong scrutiny that facial recognition is under is good news: it focusses the policy debate and helps legal approaches develop.
Society needs to trust AI before we’ll use it in a big way
Another major point in the white paper is how to build enough trust in AI among users for a healthy market to develop. In our survey of IT decision-makers, lack of trust emerged as the biggest barrier to adoption of AI. According to the European Commission, the cornerstones of a flourishing market are product safety and the liability regime.
In product safety, there are several issues ranging from new risks to lack of legal certainty. New risks include safety of autonomous vehicles; how to ensure the correct functioning of AI used in insurance, legal or government decisions; how to build adequate cybersecurity into AI-based systems; and whether people change their behaviour in public when they know surveillance is in place.
Many product markets already have specific safety requirements such as crash-testing of cars, certificates of airworthiness for planes and so on, and many of these will need to be updated to take account of AI built into the products. Strengthening product safety requirements will help create legal certainty, which in turn will give suppliers more confidence in serving European markets with AI-based products.
Microsoft emphasized that regulation is necessary but not sufficient. There's a lot beyond complying with regulations that suppliers can and should be doing. It argued that this isn't just altruism: it creates much more trust in the technology and will enable faster adoption.
This mentality underpins the creation of Microsoft's Office of Responsible AI and its internal practices. For example, 18 months ago the company announced Datasheets for Datasets as a way of providing good information about the data sets used to train AI systems, a bit like the leaflet enclosed in a pack of pills. This approach is now used throughout Microsoft and could be adopted across the whole industry. Microsoft's marketing of AI services now includes, for the first time, information about the limitations of the services, not just their features.
The liability regime is another important aspect in building user trust and providing regulatory certainty for suppliers. It was especially interesting to see just how little agreement there was on this among participants at the event. The main hurdle here is that the burden of proof for liability currently falls on users. Many people believe this to be unrealistic and unfair because AI systems are extremely complex, often opaque, and built on data that users can't access. Most users are in no position to understand how the system even works, let alone prove liability. If an autonomous taxi had an accident, where would the passenger start in bringing a liability claim?
A suggestion for overcoming this challenge is to separate proving blame from paying compensation. This could mean that users would be compensated easily by sellers when things go wrong, just like taking a faulty product back to the shop for a refund. In the case of an accident involving an autonomous vehicle, compensation would be paid by the vehicle supplier's insurance, leaving the various parties in the supply chain to work out between them whose fault it was.
Interestingly, a spokesperson for the insurance industry took a completely different view, arguing that the existing system has effective checks and balances built in, and that the current product liability directive is fit for purpose in the AI era. In his view, no-fault liability would add an unnecessary layer of insurance, with risks that are hard to quantify and price. This thinking rests on an assumption of a strong level of product safety legislation, standards, tests, norms and labelling, none of which yet exists in AI. He therefore argued that those areas should be the highest priority for effort.
An aspect of trust that didn't surface at the event was a professional qualification for AI practitioners, similar to the qualifications and accreditation required of doctors, dentists, pilots, architects, lawyers and others. Although I applaud the efforts on product safety and liability, it seems to me that the people developing AI for use in products and services are a major link in the chain. This is especially true as there's a strong push to expand the pool of AI developers (the so-called democratization of AI) by enabling more people to use the technology, including citizen developers.
Other industries provide a useful parallel here:
- Drug testing and certification within pharmaceutical companies is a strong regime, but certain drugs can only be prescribed by a doctor.
- Aircraft have a certificate of airworthiness, but are only allowed to be flown by people who hold a pilot's licence.
- Cars are certified for safety, but are only allowed to be driven by people who hold a driving licence.
In my opinion, no product safety regulation or liability regime, however sound, will protect society from bad AI developed by incompetent or malicious developers. A professional qualification will be an essential component in the longer term, so it's surprising to see so little focus on it today. It may be that the initiative for this comes from developers themselves, looking to protect their own reputation.
What’s next?
The European Commission sees the development of AI regulation as a powerful enabler of growth in the markets involved. Others may view it as a threat and, of course, most regulation comes with sanctions. At the event, CCS Insight asked what can be done now to prepare for future European regulation of AI. Werner Stengg, a member of the commission cabinet responsible for ensuring Europe is fit for the digital age, replied that companies should "make sure their business model is decent and reflects the rights of citizens". This clear sideswipe at Facebook and Google neatly highlighted the dual nature of the topic.
The European Commission plans to digest consultation responses from late May 2020 and then produce a proposal for regulation, currently expected in the first quarter of 2021. The development of the actual regulation will follow, along with an implementation period similar to that of the GDPR. Overall, this means that any new regulation may not take effect before 2024 or 2025.
Given that timescale, there’s clearly room for the Data Science and Law Forum to run as a useful event every 12 to 18 months for at least the next five years. Good progress is being made, both within AI suppliers and within institutions. But there’s a long way to go before markets will be properly set up to work with AI, and before users can adopt the technology with the same confidence they have in other product areas.