The Long Road to AI Regulation

Facial recognition debate thwarts EU’s Artificial Intelligence Act

The topic of facial recognition raised its controversial head again last month, when two of the EU’s privacy watchdogs — namely the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) — teamed up to call for a general ban on any artificial intelligence (AI) that identifies people in public spaces. The two bodies also recommended a ban on AI using biometrics to categorize people based on ethnicity, gender, political or sexual orientation, as well as “to infer emotions” of a person.

What makes this public intervention so significant is that it flies in the face of the European Commission’s — and the world’s — first attempts at regulating the technology. The bloc’s draft Artificial Intelligence Act, published in April, has generated huge discussion about its implications for the development of AI, especially in an enterprise context. It also comes hot on the heels of accelerating adoption. According to our Senior Leadership IT Investment Survey, 2020, more than 80% of enterprises are now either trialling or in production with AI, up from 55% in 2019, and 58% plan to increase their investment in the technology.

The Artificial Intelligence Act proposes a risk-based approach to regulation, focussing on the use of AI rather than the technology itself. Among the many areas it covers, the act classifies facial recognition as a high-risk application, meaning it would be subject to “strict restrictions” rather than the outright ban the privacy watchdogs are now backing.

The EDPB and EDPS don’t get to make or change laws, but they are the European Commission’s top advisers on privacy issues, responsible for the application of its privacy rules and overseeing EU institutions’ own compliance with data protection law. Their unequivocal intervention sends a very strong message to the bodies that come up with AI regulation’s final form — namely the European Commission, the European Parliament and the Council of the EU, which represent national governments — that they need to be aligned to make the Artificial Intelligence Act a reality.

Facial recognition remains divisive

The conflicting stances within factions of the EU highlight that there’s still work to be done before we see a unified approach to regulatory safeguards on the use of AI in Europe, an approach that could then set a trend globally as regulators around the world watch Europe’s early positioning in this area. The dispute also reinforces that facial recognition remains one of the most polarizing branches of AI.

On one hand, the technology has witnessed widespread adoption in several areas, for example in our phones and computers, helping to replace passwords; in airports to improve passport checks; and in retail to curb shoplifting.

On the other hand, the use of AI for security and surveillance by law enforcement around the globe has been sounding alarm bells for several years now among privacy and civil liberties advocates, who are concerned about privacy violations and profiteering by big tech. Over a year ago, Amazon, Microsoft and IBM all halted the sale of the technology to police departments in response to these fears and called for more regulation in the area. While the US Congress dithers, the world continues to debate this thorny issue.

The impact of the EU’s Artificial Intelligence Act

In my view, the “world first” proposals from the EU are foundational steps in maturing AI in several key areas. Firstly, they include bans in areas seen as a threat to safety, livelihoods and human rights, and they call out social scoring applications in the public sector in particular. Built on the foundation set by the General Data Protection Regulation (GDPR), they include hefty fines for violations — up to 6% of global turnover — and, importantly, they lay out specific obligations for companies developing high-risk applications. These responsibilities include ensuring high-quality data that is free of bias and discrimination; human oversight in design, development and implementation; and high standards of documentation for explainability, as well as cybersecurity measures.

But there are also some glaring gaps in the proposals. These include a lack of rules on the use of AI in the military, the absence of sustainability guidelines addressing AI’s environmental impact, and a lack of advice for companies on how to perform conformity assessments. There’s also surprisingly little about how enforcement will work between the EU and national authorities. Much of this additional detail will emerge as the proposals get thrashed out in the European Parliament and at national government levels over the next few years.

By 2023, though, it’s likely that the Artificial Intelligence Act will be a strong catalyst for a rethink in how companies design and deploy AI in several major areas. Firstly, we’ll see much more formalized human oversight of AI within companies, creating new job responsibilities or possibly entirely new job roles and functions like AI compliance officers and machine learning reliability engineers.

Secondly, adoption of tools such as MLOps and life cycle management platforms will rise. We’re already seeing rapid development of tools for bias detection and explainability, which will become mandatory under laws for high-risk applications such as loan scoring, medical devices or self-driving vehicles. However, I believe they’re set to become standard in a wider range of machine learning applications as well, as enterprises begin to compete on how well they adhere to trusted AI principles, including those laid out in the Artificial Intelligence Act.
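To make the idea of bias-detection tooling concrete, here is a minimal sketch, assuming a NumPy environment, of the kind of fairness check such tools automate for a hypothetical loan-scoring model. The metric (demographic parity) and the 0.10 tolerance are illustrative assumptions for demonstration, not requirements laid out in the Artificial Intelligence Act.

```python
# Illustrative only: a toy demographic-parity check of the kind that
# bias-detection tooling automates. The 0.10 threshold is an assumption
# for demonstration, not a figure prescribed by the draft act.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Return the largest gap in positive-outcome rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

# Hypothetical loan-approval predictions (1 = approved) and applicant groups.
y_pred = np.array([1, 1, 1, 1, 0, 1, 0, 0, 0, 1])
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

gap = demographic_parity_gap(y_pred, group)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:  # illustrative tolerance
    print("Flag for human review before deployment.")
```

Commercial platforms wrap checks like this, along with explainability reports and audit trails, into the model life cycle so that evidence of oversight exists before a high-risk system goes live.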

Challenges remain in the EU’s pursuit of a product safety standard or seal for AI, akin to the CE mark, indicating that conformity assessments have been carried out; most notably, enterprises must self-assess or rely on private firms to do so. Even so, enterprises will pay more and more attention to AI safety. And this will be a boon for the likes of Microsoft, IBM, Google Cloud, Amazon Web Services and specialist firms such as Algorithmia and DataRobot, which provide solutions in these areas.

Thirdly, the law will put much more focus on the business and compliance risks in machine learning systems — what the UK Centre for Data Ethics and Innovation called AI Assurance in a recent blog post urging an ecosystem of suppliers to develop in this area to support enterprise compliance efforts. A wider range of firms helping enterprises address AI assurance, backed by the legal certainty of the Artificial Intelligence Act, should help the market mature, especially in lagging segments such as the public sector and small and medium-sized businesses.

The bottom line is that there’s been a lot of talk about ethical and principled AI over the past few years from governments, policy-makers, businesses and tech suppliers, but it has largely taken place in a legal vacuum. Soon it will become clear who’s just been paying lip service to it.

The road ahead

The EU’s conflicting positions on facial recognition will take time to resolve on the long road ahead for AI regulation. But together with the Commission’s first proposals, they are a bold step in the right direction, attempting to examine, question and offer a balanced approach to the development of the technology.

The outcome is likely to set in motion attempts at a global standard — although China remains a big question mark — for not only how surveillance based on biometrics is seen and regulated but, more widely, AI itself. There’s an inevitability to the increased adoption of AI, and as society looks to create a balance between security and privacy, between safeguards and innovation, oversight is going to be a necessary part of the larger bureaucracy.