How to Get Past AI’s Barriers to Entry

Turn artificial intelligence into a core competency

If your business is already experimenting with artificial intelligence (AI), you’re in good company. Our global survey of IT decision-makers shows that two-thirds of businesses are researching, testing or already using machine learning.

Yet far fewer companies have figured out how to embed AI into their business processes. By my estimates, just 10% of businesses have made the leap from dabbling in AI to building a business-wide strategy around it, moving the technology beyond the limited world of academics and data scientists and into the hands of “front line” workers. That leaves the majority yet to establish governance of AI and manage it as a core competency, just as they do finance and risk management.

But as we’ve found when talking to companies about AI adoption, governance can be a rocky road, especially in a field where the technology and best practices are still maturing. Companies that get ahead of the governance challenges outlined below will find the transition far smoother, which means AI can start solving real problems, faster.

Systems Design

This is the function businesses worry about first and foremost, because it’s tied to issues of bias and trust. In our survey, IT decision-makers named trust in AI systems as the biggest barrier to implementing the technology. Companies clearly want to improve the explainability and transparency of AI so that it’s seen as a day-to-day business tool, not a black box operating under a shroud of mystery.

I see two solutions for better governance of systems design. Firstly, embed trust and transparency into your design process when building out AI. The AI companies we track at CCS Insight are increasingly looking to create tools that help business users understand how algorithms generate their findings. User-friendly monitoring features can alert people to problems with a model’s performance or accuracy, so that workers can explain to colleagues and customers what’s going on under the hood.
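To make that concrete, here is a minimal sketch of the kind of monitoring feature described above: a check that compares a model’s recent accuracy with the baseline it was approved at and raises a plain-language alert when the gap grows too large. The baseline figure, tolerance and naming are illustrative assumptions, not any particular vendor’s tooling.

```python
# Minimal sketch: flag a model whose recent accuracy has drifted too far
# below the accuracy it was approved at. Baseline and tolerance values are
# illustrative assumptions, not taken from any specific product.

from dataclasses import dataclass


@dataclass
class ModelHealthReport:
    accuracy: float   # observed accuracy on recent, labelled traffic
    baseline: float   # accuracy measured when the model was approved
    alert: bool       # True when the drop exceeds the tolerance
    message: str      # plain-language explanation for non-specialists


def check_model_health(y_true, y_pred, baseline=0.90, tolerance=0.05):
    """Compare recent accuracy against the approved baseline."""
    if not y_true or len(y_true) != len(y_pred):
        raise ValueError("y_true and y_pred must be non-empty and equal length")

    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    drop = baseline - accuracy

    if drop > tolerance:
        message = (f"Accuracy {accuracy:.2%} is {drop:.2%} below the approved "
                   f"baseline of {baseline:.2%}; review the model and its data.")
        return ModelHealthReport(accuracy, baseline, True, message)
    return ModelHealthReport(accuracy, baseline, False,
                             f"Accuracy {accuracy:.2%} is within tolerance.")


if __name__ == "__main__":
    recent_labels = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    recent_preds  = [1, 0, 0, 1, 0, 0, 0, 1, 1, 1]
    print(check_model_health(recent_labels, recent_preds).message)
```

A report like this is something a business user can act on and explain, which is the point: the monitoring output has to be legible outside the data science team.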

Secondly, because trust is a function of culture, encourage employees at every level to use AI, and put in place processes and systems that contribute to transparency and explainability. Online retailers tend to be good at this: they are data-driven both externally (building customer relationships) and internally (improving supply chain operations). The level of transparency and explainability required will also depend on how mission-critical the use of AI is. For example, applications that support financial decisions such as fraud detection or lending will require more transparency than, say, product recommendations in retail.

Compliance

This is a business function that thrives on processes and checklists that streamline audits. Companies want AI to fit into their existing compliance processes, but the technology’s immaturity is a barrier. How do you prove you are protecting AI data, and that AI systems undergo the same audit checks as other IT systems? How do you demonstrate that you are monitoring algorithms for potential bias, especially when training data changes all the time?
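As one illustration of what such monitoring evidence might look like, the sketch below runs a simple parity-style check: it computes the rate of positive outcomes per customer group, flags any group whose rate drifts too far from the overall rate, and produces a timestamped record that could be kept for auditors. The group labels, threshold and log format are assumptions for illustration, not a regulatory standard.

```python
# Minimal sketch: a repeatable bias check whose output can be retained as
# audit evidence. Threshold and group names are illustrative assumptions.

from collections import defaultdict
import json
import time


def demographic_parity_check(records, max_gap=0.10):
    """records: iterable of (group, prediction) pairs, prediction in {0, 1}."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, prediction in records:
        totals[group] += 1
        positives[group] += int(prediction)

    overall_rate = sum(positives.values()) / sum(totals.values())
    findings = {}
    for group in totals:
        rate = positives[group] / totals[group]
        findings[group] = {
            "positive_rate": rate,
            "gap_vs_overall": rate - overall_rate,
            "flagged": abs(rate - overall_rate) > max_gap,
        }

    # A timestamped entry so the check itself can be evidenced later.
    return {"timestamp": time.time(), "overall_rate": overall_rate,
            "max_gap": max_gap, "findings": findings}


if __name__ == "__main__":
    sample = [("group_a", 1), ("group_a", 1), ("group_a", 0),
              ("group_b", 0), ("group_b", 0), ("group_b", 1)]
    print(json.dumps(demographic_parity_check(sample), indent=2))
```

Running a check like this on a schedule, every time the training data is refreshed, turns “we monitor for bias” from an assertion into a paper trail.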

The compliance bumps in the road will become easier to address when AI regulatory standards become more commonplace. For example, in mid-April 2019, the US Food and Drug Administration proposed a new regulatory framework for AI-driven medical devices; we can assume more such standards are on the way.

Another way to ease the compliance burden: push cloud suppliers to become compliant. Rather than hiring expensive data scientists, businesses can turn to cloud-based companies that specialize in quality data sets and AI models. As they do so, business leaders can lighten their own load by ensuring those providers have ticked every box on their compliance checklists.

Until regulations catch up, don’t think of compliance as stifling innovation or slowing the progress of your AI projects. Think of it instead as a critical process for improving performance, execution and, above all, the quality assurance needed to operationalize AI in your organization.

Security

As more businesses adopt AI, cyber-attackers will see value in disrupting AI systems for fun and profit. Imagine the impact of malicious data stealthily added to a model to alter its predictions, or of hackers fooling a self-driving car into thinking there are extra lanes of traffic or no stop signs on a road.

Most companies have (or should have) governance policies for data protection. But they’ll need to think beyond just keeping data safe: they’ll have to protect models as well. Companies that have already made AI part of their operations are experimenting with ways to build secure infrastructure around it. Executives at one bank told me they’re considering running their AI models on encrypted data, a process that’s becoming more feasible.

I predict we’ll see more security tools for AI that govern access, monitor performance and scan systems to protect them from malicious attacks.

Privacy

This is another governance pillar that keeps business leaders up at night. In our survey, the ability to ensure data security and privacy was named the most important IT consideration for investment in AI and machine learning.

The General Data Protection Regulation (GDPR) will hasten discussions about privacy protections for AI data: it requires that data be processed in a transparent manner, that businesses inform people when their data has been breached, and that users be able to opt out of having their data used to train AI systems.

But GDPR remains ambiguous on other aspects, such as automated decision-making, the right to an explanation and requests to delete data from trained machine learning models. Until there’s clearer direction, data science teams will need to be cognizant of the privacy and legal risks.

What I’m hearing from large technology companies, which are usually further along the AI maturity curve, is that they are starting to spell out AI-related opt-in policies more clearly. To properly govern the use of private data for AI — and to avoid running afoul of data privacy regulations — businesses should be explicit about whether and when customers’ data is used to train models.
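In practice, making that explicit can be as simple as enforcing the opt-in inside the data pipeline itself. The sketch below, a minimal illustration rather than a recommended implementation, admits only records carrying an affirmative consent flag into the training set and counts the exclusions so the decision can be evidenced later; the field name “consent_to_train” is an assumption for the example.

```python
# Minimal sketch: enforce an explicit opt-in before data reaches training.
# The "consent_to_train" field name is an illustrative assumption.

def split_by_consent(records, consent_field="consent_to_train"):
    """Return (training_rows, excluded_count) from a list of customer dicts."""
    training_rows = [r for r in records if r.get(consent_field) is True]
    excluded_count = len(records) - len(training_rows)
    return training_rows, excluded_count


if __name__ == "__main__":
    customers = [
        {"id": 1, "consent_to_train": True,  "spend": 120.0},
        {"id": 2, "consent_to_train": False, "spend": 85.5},
        {"id": 3, "spend": 40.0},  # no recorded consent: treated as opted out
    ]
    train, excluded = split_by_consent(customers)
    print(f"{len(train)} records eligible for training, {excluded} excluded")
```

The design choice worth noting is the default: records with no recorded consent are treated as opted out, so the burden sits with the business to capture the opt-in, not with the customer to object.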

Time to Get Off the Sidelines

When do you shift from experimenting with AI to implementing it, and start overcoming governance hurdles? The signs: more and more departments depend heavily on AI, you want to scale it beyond a few test projects, and you’re using it to solve real business problems, such as generating more revenue or attracting more customers.

Whether your business is still iterating on AI projects or reaching the point where you want AI to scale, don’t sit on the sidelines too long before tackling governance: you risk undermining customers’ trust and seeing competitors leapfrog ahead of you.

It’s important to embed governance thinking in the design phase. Doing so will reduce problems later and, above all, engender confidence in AI, which ultimately leads to faster deployments, wider adoption and more responsible innovation in your business.