Intel Uncorks OpenVINO

New Toolkit Is a Big Move in Computer Vision for AI at the Edge

On 16 May, Intel announced a software development kit for computer-vision applications in network-edge devices. Known as OpenVINO, which stands for Open Visual Inference and Neural Network Optimization, the toolkit allows users to take neural models developed in the cloud and deploy them onto edge devices powered by different types of Intel processor. The company claims OpenVINO has the broadest compatibility of any toolkit with the many neural network development frameworks available, including Caffe, MXNet and TensorFlow.

The most significant partner announcement at the launch was that OpenVINO is integrated into Amazon Web Services' DeepLens camera, announced in November 2017.

Other players have also made moves in this area in recent weeks. In April, Qualcomm released its Vision Intelligence Platform, based on two new chipsets and supported by its artificial intelligence engine and computer vision software development kit (see Qualcomm Stakes a Claim in Vision Intelligence).

More recently, Microsoft announced its Project Kinect for Azure kit, enabling computer vision development in products like drones. It also revealed that its Custom Vision cognitive service will be the first of its artificial intelligence solutions to run on its Azure IoT Edge platform (see Microsoft Trumpets Progress at Build 2018). Together with Qualcomm, Microsoft further announced a vision-based artificial intelligence developer kit built on Qualcomm processors that runs Azure IoT Edge.

Two aspects differentiate Intel's approach with OpenVINO. The first is that it offers clickable options for developers to choose which type of processor to use for running artificial intelligence models, including CPUs, graphics processing units (GPUs), Movidius Vision Processing Units and Altera field-programmable gate arrays. The software then automatically optimises the model to run on the chosen processor, even allowing a mix of devices to be used at the same time. This lets developers scale the use of the model across a range of device types and select the most appropriate processor type for specific workloads. The toolkit also offers future-proofing by allowing for easy device upgrades.

The second distinguishing factor is that despite being optimised for computer vision, OpenVINO can work with any neural network model. This opens up the possibility of a road map of similar software development kits optimised for other artificial intelligence applications, including time-series predictive analysis and audio processing. Indeed, we believe there’s a significant opportunity for a broader launch once the computer vision use case has established itself.

Although the launch by Intel was relatively low-key, these two differentiators mean that the arrival of OpenVINO is a big deal in the market, with the potential to build a large position for Intel. This is underlined by the fact that the toolkit was adopted by several companies before being released, so it already has commercial momentum. For example, Dahua Technology uses the toolkit for smart city and traffic applications, GE Healthcare for medical imaging, and Hikvision for industrial and manufacturing safety. Other companies implementing it include Agent Vi, Current by GE, Dell and Honeywell.

Computer vision is one of the best opportunity areas for deploying artificial intelligence, and a major area of interest for the Internet of things. Intel's press release cited research claiming the computer-vision market will be worth over $17 billion by 2023, so it's no surprise to see industry heavyweights moving strongly into this domain.

Each of the new approaches will have to earn its place in a market currently dominated by Nvidia with its GPUs. The company has done a great job of creating the perception that artificial intelligence systems need GPUs to work well, especially for training artificial intelligence models. Nvidia is also becoming increasingly vocal about the role of its GPUs for inference (see Instant Insight: Nvidia GPU Technology Conference 2018).

However, Google, Microsoft and others are creating specialist chipsets for their artificial intelligence systems, and it has become clear that several other valid approaches exist for developing and implementing such systems. This will be especially true for artificial intelligence at the network edge in distributed systems, which represent the next wave of growth in the deployment of artificial intelligence. In those systems, inference is the dominant mode, but power, memory and cost constraints mean that GPUs may not be the first choice.

The network edge is seeing a huge diversity of approaches, with Apple, Arm, HiSilicon, MediaTek, Qualcomm and a long line of others providing different options for various end-points. The winners are likely to be those that successfully combine silicon, tools, flexibility and scale for developers.