Hardening AI Security: The New Enterprise Imperative

New framework aims to help ward off machine learning threats

As artificial intelligence (AI) comes of age in the enterprise world, the challenges customers face with the technology are also changing.

Several years ago, challenges with preparing data, finding skills and applying AI to specific uses were the dominant headaches, but new topics are coming to the forefront, particularly as business leaders start to think more about the risks stemming from greater operationalization of the technology. Among them are liability, compliance, ethics and, above all, security. According to our latest senior leadership IT investment survey, security is now the biggest hurdle companies face with AI systems, cited by 30% of respondents (see Five Highlights from Our C-Suite IT Investment Survey 2020).

This is why a recent announcement by Microsoft and non-profit MITRE is particularly relevant. The two organizations, in collaboration with 11 others including Airbus, Bosch, IBM and Nvidia, released an Adversarial ML Threat Matrix, an industry-focussed open framework to help security analysts detect and respond to threats against machine learning systems.

MITRE was founded in 1958 in the US and works in the public interest with federal, state and local governments, as well as industry and academia. It brings innovative ideas into a broad mix of areas, such as AI, intuitive data science, quantum information science, health informatics, space security, policy and economic expertise, trustworthy autonomy, cyberthreat intelligence and cyber resilience.

In 2015, MITRE released its ATT&CK framework, which stands for Adversarial Tactics, Techniques and Common Knowledge. This is a comprehensive knowledge base of tactics and techniques that cyberattackers use, gathered from real observations of this malicious behaviour. Using the data in the knowledge base, cyberdefence teams can review and contrast attacker activity, and plan the best options to fight it. The framework is free and open to everyone.

The Adversarial ML Threat Matrix aims to get organizations thinking holistically about emerging issues in securing machine learning systems. Much of today's focus on security and machine learning involves customers applying AI technology to cybersecurity. But attention is now turning not only to the security of the infrastructure, but also to the security of AI itself.

In the past few years, commercial machine learning systems around the globe have been the target of a small but rising number of attacks. This highlights the growing need to safeguard machine learning models, ensure algorithms are robust, and deploy threat detection and monitoring solutions that help protect against trojans, model inversion, spoofing and adversarial attacks, for example.
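To make the last of those concrete, here's a minimal sketch of an evasion attack, the classic "adversarial example": a small, deliberate perturbation of an input that flips a model's prediction. The toy logistic-regression model, input and perturbation budget below are all illustrative assumptions, not material from the threat matrix.

```python
# Illustrative only: an evasion (adversarial example) attack against a toy
# logistic-regression classifier using the fast gradient sign method (FGSM).
# The weights, input and epsilon are made-up values for this sketch.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = rng.normal(size=20)      # hypothetical trained weights
b = 0.1                      # hypothetical bias
x = 0.15 * w                 # an input the model scores confidently as class 1
y = 1.0                      # its true label

def predict(x):
    return sigmoid(w @ x + b)

def fgsm(x, y, epsilon=0.25):
    """Nudge each feature in the direction that increases the log-loss,
    staying within a per-feature budget of epsilon."""
    grad = (predict(x) - y) * w          # d(log-loss)/dx for a linear model
    return x + epsilon * np.sign(grad)

x_adv = fgsm(x, y)
print(f"clean score: {predict(x):.3f}, adversarial score: {predict(x_adv):.3f}")
# A successful evasion pushes the score across the 0.5 decision boundary,
# even though no feature of x_adv differs from x by more than epsilon.
```

The point is that the weakness being exploited is the model itself rather than the infrastructure around it, which is exactly the gap the new framework addresses.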

Microsoft surveyed 28 major businesses and found that almost all were still unaware of the threat of adversarial machine learning; 25 of them said they don't have the right tools in place to secure their machine learning systems. This is why one of the predictions I revealed at our Predictions Week event is that the security of machine learning will become the main priority for investment in AI by 2022 (if you missed the event, you can watch my session here).

The new Adversarial ML Threat Matrix, which builds on ATT&CK, helps organizations work toward this goal by arming security teams against attacks on machine learning systems. It catalogues vulnerabilities and adversary behaviours that Microsoft and MITRE have spotted over the years, and draws on Microsoft's extensive expertise in the security sector. The matrix helps companies bolster monitoring strategies for their mission-critical systems.

Microsoft and MITRE say they'll ask for contributions from the community through GitHub, where the threat matrix is now available. Researchers can submit studies detailing exploits that compromise the confidentiality, integrity or availability of machine learning systems running on Amazon Web Services, Microsoft Azure, Google Cloud AI or IBM Watson, or embedded in client or edge devices. Those who submit research will retain permission to share their work.

These early initiatives help companies build trust in AI and lower some of the business risk that comes with a greater operational dependence on the technology. As edge computing, 5G networks and the Internet of Things all quicken the development of AI, it's important that the market tackles security threats to the technology. Companies can use the Adversarial ML Threat Matrix framework, for example, to test the resilience of their AI models by simulating realistic attack scenarios. Security professionals can use it to familiarize themselves with the kinds of threats their organizations' systems could face in the not-so-distant future.
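As a purely illustrative sketch of what simulating such attack scenarios can look like in practice, the snippet below measures how quickly a model's accuracy degrades as an evasion attack is granted a larger perturbation budget. The data, model and budgets are assumptions for the sketch, not prescriptions from the framework.

```python
# Illustrative robustness check: how fast does accuracy fall as an evasion
# attack is given a larger perturbation budget (epsilon)?
# The synthetic data, model and budgets are assumptions for this sketch only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=30, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def fgsm_batch(X, y, epsilon):
    """Fast-gradient-sign perturbation for a linear model: move each feature
    one step in the direction that increases the log-loss."""
    p = model.predict_proba(X)[:, 1]
    grad = (p - y)[:, None] * model.coef_      # d(log-loss)/dX for a linear model
    return X + epsilon * np.sign(grad)

print(f"clean accuracy: {model.score(X_test, y_test):.2%}")
for epsilon in (0.05, 0.1, 0.2, 0.4):          # hypothetical attack budgets
    X_adv = fgsm_batch(X_test, y_test, epsilon)
    print(f"epsilon={epsilon:<4} accuracy under attack: "
          f"{model.score(X_adv, y_test):.2%}")
```

Tracking a curve like this over time, and mapping the attacks exercised to the tactics and techniques catalogued in the matrix, gives security teams a concrete measure of whether their hardening work is paying off.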

One area to watch is the rise of self-learning malware, which we predict will cause a major security breach within the next three to four years. The first forms of self-learning, adaptive malware arrived in 2018, but were largely confined to lab environments and hackathons. Today, it's becoming increasingly common to build adversarial networks for testing purposes, hardening intelligent security systems and checking whether self-learning systems can dodge detection. Given the democratization of AI tools and the wealth of innovation now available in open source, the emergence of fully fledged AI-powered malware that can learn, spread and evolve its methods to avoid detection in the wild, rather than in lab environments, is inevitable.

For this and many other reasons, the Adversarial ML Threat Matrix should serve as a good educational resource for the cybersecurity community. And we'll see more innovation of this kind as new technologies join the scene, such as homomorphic encryption, confidential computing and confidential machine learning, federated learning and differential privacy, helping enterprises navigate the critical intersection of innovation and trust.

It will be fascinating to see how the security community responds.