San Francisco Event Highlights Real Progress
Google Cloud has made many important strides in artificial intelligence (AI) since Google was famously declared an “AI-first” company by CEO Sundar Pichai back in 2017.
Over the past 12 months, the company has focussed its strategy and investments on making AI “simple, fast and useful” for enterprises, and the approach is paying off.
Now with over 15,000 paying customers, Google is starting to push ahead of major cloud rivals in this area. Our 2018 employee technology survey supports this: employees gave Google the top score as the tech company leading the advancement of AI in organizations.
Against this backdrop, Google Cloud held a one-day event for 500 customers and a select number of analysts last week in San Francisco to highlight its efforts in AI. Dubbed Let’s Talk AI, the occasion was designed to show and tell how a diverse set of customers, including Disney, Total, Keller Williams Realty, Scotiabank, Ocado and others, are applying Google’s growing list of AI services to improve their businesses.
Here we take a look at the event, its lead announcements and what they mean for Google’s AI strategy for the enterprise space.
Let’s Talk AI
Google Cloud CEO Diane Greene kicked off the event by repeating her mantra, “security is the enterprise’s biggest concern, AI is its biggest opportunity”, urging companies to develop strategies for data and apply AI to their business processes. She also introduced Andrew Moore, the new vice president of AI, who replaces Dr Fei-Fei Li. Mr Moore joined Google Cloud earlier in 2018 after serving as dean of computer science at Carnegie Mellon University.
Google Cloud made several important announcements. It introduced Kubeflow Pipelines, the latest component of the Kubeflow project, which is an open-source machine learning framework built on the Kubernetes platform. Kubeflow Pipelines is a workbench to compose, deploy and manage end-to-end machine learning workflows in Kubernetes.
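To make the workflow-composition idea concrete, here is a minimal sketch in plain Python (not the actual Kubeflow Pipelines SDK; the step names are hypothetical): a pipeline is essentially a directed graph of named steps, each consuming the outputs of the steps it depends on.

```python
# Plain-Python illustration of the pipeline concept behind Kubeflow
# Pipelines: a workflow is a DAG of named steps run in dependency
# order. This is not the kfp SDK; the three steps are hypothetical.

class Step:
    def __init__(self, name, fn, deps=()):
        self.name, self.fn, self.deps = name, fn, list(deps)

def run_pipeline(steps):
    """Run steps in dependency order, passing each step a dict of the
    results produced by the steps it depends on."""
    results = {}
    remaining = list(steps)
    while remaining:
        ready = [s for s in remaining if all(d in results for d in s.deps)]
        if not ready:
            raise ValueError("cycle or missing dependency in pipeline")
        for s in ready:
            results[s.name] = s.fn({d: results[d] for d in s.deps})
            remaining.remove(s)
    return results

# Hypothetical three-step ML workflow: prepare -> train -> evaluate.
pipeline = [
    Step("prepare_data", lambda _: [1.0, 2.0, 3.0]),
    Step("train",
         lambda r: sum(r["prepare_data"]) / len(r["prepare_data"]),
         deps=["prepare_data"]),
    Step("evaluate", lambda r: {"mean_model": r["train"]}, deps=["train"]),
]

outputs = run_pipeline(pipeline)
print(outputs["evaluate"])  # {'mean_model': 2.0}
```

In the real product, each step is a container running on Kubernetes and the graph is authored with the Kubeflow Pipelines SDK; the sketch only shows the compose-and-execute pattern.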
It also announced AI Hub, a central location for data scientists in enterprises to find various machine learning content, both public and private, such as data pipelines and notebooks to facilitate collaboration, reuse and deployment. Google said the repository will be seeded with searchable content from its Cloud AI and Research services, and will integrate with other major sources such as Kaggle and TensorFlow Hub.
Finally, the company rolled out updates to its cloud tensor processing units and released three new features in beta for its Cloud Video Intelligence application programming interface (API).
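For context, the Video Intelligence API is driven by annotate requests that name a video in Cloud Storage and list the features to extract. A sketch of the request body as a Python dict follows (the bucket path is hypothetical, and real calls also require OAuth credentials and are sent to the `videos:annotate` endpoint):

```python
import json

# Sketch of a Cloud Video Intelligence annotate request body.
# The gs:// path is hypothetical; authentication is omitted.
request_body = {
    "inputUri": "gs://my-bucket/clip.mp4",  # hypothetical video location
    "features": ["LABEL_DETECTION", "SHOT_CHANGE_DETECTION"],
}

payload = json.dumps(request_body)
print(payload)
```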
Working to Democratize AI
The announcements reinforce Google’s large-scale investments over the past 12 months. AI now pervades most of its development initiatives internally, having become central to its search, YouTube recommendations, data-centre cooling systems, G Suite features and malware detection capabilities, for example.
But outside Google and the tech industry in general, adoption of machine learning has been low. There are just 10,000 people estimated to be working in deep learning around the world, and only 2 million data scientists. Google Cloud’s strategy is therefore aimed at making AI more accessible to organizations needing custom solutions, as well as to the more than 23 million developers who have yet to get started with the technology.
To support this goal, the company is focussing on four domains:
- Data and analytics tools, including its BigQuery data warehouse suite.
- Its Cloud AI platform, made up of custom hardware machine learning accelerators, its machine learning engine, Kubeflow and support for open-source libraries like TensorFlow, Apache Spark and Torch.
- Cloud AI Building Blocks, a set of off-the-shelf machine learning models and developer APIs for AI functions such as vision, language, conversation, inference and customization.
- A set of business solutions for contact centres, document understanding and job discovery.
What Sets Google Cloud Apart?
Google has one of the most competitive AI portfolios in the industry. But it faces fierce competition from Amazon Web Services, Microsoft and IBM in many of these domains. So where does it really set itself apart? The event highlighted five crucial areas where Google Cloud differentiates itself from these players.
- Cloud AutoML. Arguably Google’s most important AI product set, its suite of machine learning products automates and customizes the machine learning process for developers. The company stated in July 2018 that its beta program, launched in January, had more than 18,000 customers signed up. This number is over 30,000 today.
- Tensor processing units. Google's custom chips, which accelerate neural network computation behind the scenes of its services, are becoming popular with enterprises seeking higher speed and accuracy in training machine learning models. Google also broadened its offering to edge devices this year, using TensorFlow Lite to unlock more opportunities in the Internet of things. At the event, it said that LG is using its tensor processing units in cameras on assembly lines to detect manufacturing anomalies more quickly and accurately.
- Hybrid cloud and Kubernetes. Its open-source Kubeflow products, launched just this year, are early stakes in the ground in hybrid cloud and distributed machine learning on Kubernetes containers, a hugely popular and flexible platform for enterprises that Google pioneered.
- Kaggle. With more than 2 million members and 12,000 data sets available on the platform, Kaggle is the largest credentialled network for data scientists in the world. The community is a great asset to inform strategy and we believe a fertile ground to integrate Google Cloud’s AI products including tensor processing units, Kubeflow and BigQuery in the future.
- Research. Google Cloud has massive research assets at its disposal from parent Alphabet, including Google Brain and DeepMind, but it has been quiet on how it exploits research for enterprises. Nonetheless, it’s doing more than meets the eye here. We expect applied AI from research to become a big advantage for Google Cloud over the next few years.
Trusted AI and Governance: the Next Priorities
Despite these strengths, Google Cloud must put more effort into improving trust in the technology and its tools, as well as into governance of AI. The company ended the keynote presentation at the event with arguably its most comprehensive discussion about responsible AI to date. It outlined its principles in this area and shared perspectives on important topics such as ethics, AI for social good, bias detection and algorithmic transparency.
Although these are positive first steps, Google must continue to build trust in AI, currently a major barrier to adoption in organizations, by also focussing on its governance. This means not only educating customers, but also bringing some of its internal tools to its platforms to help customers govern AI in areas such as explainable models, bias detection, model health and management, auditability and, above all, security and privacy.
According to results from our forthcoming survey of IT decision-makers, the ability of AI systems to ensure data security and privacy is now the most important consideration for investment in machine learning, cited by 41 percent of decision-makers, representing a big jump over 2017.
AI has become Google’s most important and successful weapon in its attempts to shake off its underdog status in the cloud wars. The event both talked the talk and walked the walk on the company’s solid progress over the past year. Although we expect further improvements to its portfolio, Google must now expand its strategy into trusted AI and governance, which will be crucial domains as AI competition heats up in 2019.