Microsoft Seeks Responsible AI

Brussels event highlights company’s strides

A couple of weeks ago, Microsoft gathered experts from academia, civil society, policy making and more to discuss one of the most important topics in tech at the moment: responsible artificial intelligence (AI).

Microsoft’s Data Science and Law Forum in Brussels was the setting for the discussion, which focussed on rules for effective governance of AI.

AI governance and regulation may not be everyone's cup of tea, but the event covered an array of subjects where the topic has become red-hot, such as the militarization of AI, liability rules for AI systems, facial recognition technology and the future of quantum computing. The event also gave Microsoft an opportunity to showcase its strategy in this important area.

A few highlights are worth sharing, so let’s dig a bit deeper into what Microsoft is doing in responsible AI, why it’s important and what it means for the market.

Responsible AI is now a priority

Responsible AI is a combination of principles, practices and tools that enable businesses to deploy AI technologies in their organizations in an ethical, transparent, secure and accountable manner. The subject has been getting a lot of attention for several reasons.

Firstly, high-profile examples of biased algorithms, autonomous vehicle accidents and privacy-violating facial recognition systems are increasing the public's awareness of the dangerous, unintended consequences of AI.

Secondly, enterprises are beginning to move early AI projects out of the lab and must now weigh the real-world risks and responsibilities of deploying AI in their operational processes.

And thirdly, as decision-makers consider introducing data and AI solutions in critical areas such as finance, security, transportation and healthcare, concerns are mounting over ethical use, the potential for bias in data and a lack of interpretability in the technology, as well as the prospect of malicious activity such as adversarial attacks.

For these reasons, the governance of machine learning models has become a top investment priority for enterprises. According to CCS Insight's survey of IT decision-makers, for example, transparency in how systems work and are trained, and the ability of AI systems to ensure data security and privacy, are now the two most important requirements when investing in AI and machine learning technology, cited by almost 50% of respondents (see Security, Cloud and Ethics Dominate IT Priorities in 2019).

Tools and Rules for Trustworthy AI

It was against this backdrop that Microsoft's Corporate, External and Legal Affairs division (an underrated asset in Microsoft's cloud competition with Google and Amazon) hosted an event called Tools and Rules for Trustworthy AI.

The event was held a stone's throw from the European Parliament, a symbolic location following the release of the European Commission's white paper on AI just two weeks earlier. The paper represents a critical moment for the AI industry as the first major attempt to outline potential policy options for regulating the technology within the EU. Among a host of proposals in the white paper, the prime recommendation is a potential regulatory framework that focusses initially on sensitive uses of AI, such as applications that may affect human safety or rights, and on high-risk sectors such as healthcare, transportation and energy.

Naturally, much of the discussion at the event focussed on the commission's white paper. The consensus was that although a lot more work on the details remains to be done in the coming years, particularly on implementation, the EU is headed in the right direction (the proposals are in a public consultation period until May 2020). As with its position on data privacy, which led to the General Data Protection Regulation, the EU's goal here "is to become the leader in trustworthy AI", as stated by Didier Reynders, the European Commissioner for Justice, in the event's opening keynote.

How Microsoft approaches responsible AI

In addition to leading and facilitating the discussion, Microsoft used the event as an opportunity to articulate its approach to responsible AI.

The company has been highly vocal on the topic going as far back as 2016, when CEO Satya Nadella laid out six goals that he believes AI research must follow in order to keep society safe. A year later, Microsoft set up its AI and Ethics in Engineering and Research (AETHER) Committee, a cross-company set of internal working groups tasked with deliberating hard questions about the use of AI and advising Microsoft leadership on its development.

The AETHER committee set the blueprint for Microsoft's six principles for AI, published in 2018 in the book The Future Computed. The principles (fairness; inclusiveness; reliability and safety; transparency; privacy and security; and accountability) now guide its end-to-end approach to AI, from development to deployment.

Interestingly, there are now more than 30 sets of similar AI principles in the tech industry, many of which are too high-level and abstract to offer practical, operational guidance to customers. What sets Microsoft's approach apart is its added focus on practices and implementation.

From principles to practices: lessons from the Tay chatbot and facial recognition

In January 2020, Microsoft unveiled its new Office of Responsible AI, an internal group within its Corporate, External and Legal Affairs department, dedicated to putting a set of AI-related ethics and governance principles into practice throughout the company.

The unit has four main responsibilities. Firstly, it sets company-wide policies and practices for the responsible implementation of AI. Secondly, it ensures readiness to adopt those practices within Microsoft and helps customers and partners do the same. Thirdly, it has a case management role, reviewing and triaging sensitive uses of AI to help ensure the company's principles are upheld. And finally, it has a public policy role, helping to shape and advocate for responsible AI policies externally through its work with the Partnership on AI, events such as the Data Science and Law Forum and its consultations with the EU and other regulatory bodies worldwide.

Perhaps most importantly, the Office of Responsible AI is a powerful vehicle for deriving practices from some of the mistakes Microsoft has made with AI in the past. Natasha Crampton, the head of the division, said that the company's experiences with its Tay chatbot in 2016 and the failure of its facial recognition technology to recognize certain skin tones in 2018 have been "instrumental" in shaping governance policies under its new charter. As a result, Microsoft has published responsible bot guidelines as well as transparency notices that communicate the limitations of its facial recognition technology.

The company is also sharing its lessons in responsible AI through its AI Business School, which is becoming a differentiated asset in its AI strategy. Responsible AI is now one of the most popular subjects in the programme. Last month, the school announced that it had expanded the responsible AI module to provide business leaders with insights on the topic from AETHER, the Office of Responsible AI and customers such as State Farm.

Microsoft is leading the responsible AI discussion

Altogether, these efforts reinforce my view that Microsoft is playing a leading role in shaping the discussion about trustworthy AI. Customers I speak to care less about the conceit of algorithmic perfection from an AI supplier. Rather, they want to know, above all, that they're on solid foundations with a responsible provider as they advance their AI strategies. Microsoft's approach, spanning principles, practices and policy engagement, much as its work in security, privacy and accessibility did before, is helping differentiate the company as a trusted advisor, which in turn is increasing trust in its cloud business.

Although this sets Microsoft apart from the competition, the road ahead is far from straightforward.

The firm will need to look more closely at offering certification and training programmes in responsible AI. Additionally, an area that understandably received less attention in Brussels, given the event's focus on policy, was Microsoft's tools for responsible AI. Aimed at data scientists and developers, these are expanding rapidly in Azure Machine Learning and include MLOps capabilities for life cycle management, the InterpretML and Fairlearn toolkits for explainability and bias detection, and data drift monitoring, among others; a brief sketch of the kind of check these toolkits enable follows below. A cynic could argue that Microsoft's effort is all about driving business to Azure, where it has a lot to gain from this area. I will explore the competitiveness of these tools as a major aspect of the strategy in an upcoming post.
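To make the developer side of this more concrete, here is a minimal sketch of the kind of group-fairness check Fairlearn enables. The synthetic loan-approval data, the logistic regression model and the binary group labels are illustrative assumptions of mine, not an example from Microsoft's documentation; only the MetricFrame and demographic_parity_difference APIs come from the Fairlearn toolkit itself.

```python
# A minimal sketch of a Fairlearn-style bias check; the data and model are
# synthetic stand-ins, not an example from Microsoft or the Fairlearn docs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

rng = np.random.default_rng(0)

# Hypothetical loan-approval data with a binary sensitive attribute.
X = rng.normal(size=(1000, 5))
group = rng.integers(0, 2, size=1000)  # two demographic groups, 0 and 1
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Break accuracy down by group to spot disparities between groups ...
frame = MetricFrame(metrics=accuracy_score, y_true=y, y_pred=pred,
                    sensitive_features=group)
print(frame.by_group)      # accuracy for each group
print(frame.difference())  # largest accuracy gap between groups

# ... and measure how unevenly positive predictions are distributed.
print(demographic_parity_difference(y, pred, sensitive_features=group))
```

The same MetricFrame pattern accepts any scikit-learn-compatible metric, which is what makes this style of check straightforward to slot into an existing training pipeline.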

Although less directly related to AI, Microsoft's HoloLens product continues to be used by the US military, raising ethical questions in other areas of its business. More importantly, Microsoft faces some very tricky privacy and security challenges in areas like social media provenance and deep fake technology, as well as in new products like Workplace Analytics and its custom speech API, which is in gated preview.

These areas will test the case management function of both AETHER and the Office of Responsible AI in the future. Custom speech is a good example: the Wall Street Journal reported in August 2019 that fraudsters had used a voice-based deep fake of a chief executive to trick a senior employee at a major UK company into wiring more than $240,000 to a criminal bank account. It was one of the first reported cases of AI-enabled voice fraud, and although the incident did not involve Microsoft's technology, it's an astonishing reminder of the unintended effects of custom speech technology, which will call for deeper operational and security guidelines down the road.

It’s early days for responsible AI, but it’s a crucial area to help companies avoid problems and improve the performance and quality of the AI applications they deploy. It’s going to be fascinating to track how customers, Microsoft and rivals respond to these trends over the next 12 months.