Intel Official News and Information

Intel Accelerates Development of Artificial Intelligence Solutions with Open Neural Network Exchange Support

Today, Intel announced that it joined the Open Neural Network Exchange (ONNX) to enhance framework interoperability for developers, boosting efficiency and speeding the creation of artificial intelligence (AI) and deep learning models. AI and deep learning are transforming how people engage with the world and how businesses make smarter decisions.

Press Kit: Artificial Intelligence

The ONNX format was first announced last month by Microsoft* and Facebook* to give users more choice among AI frameworks, as every modeling project has its own set of requirements that often call for different tools at different stages. Intel, along with others, is participating in the project to provide greater flexibility to the developer community by giving developers access to the most suitable tools for each unique AI project and the ability to easily switch between frameworks and tools.

Intel’s contribution to the open ecosystem for AI will broaden the toolset available to developers through neon and the Intel® Nervana™ Graph, as well as deployment through the Intel® Deep Learning Deployment Toolkit. neon will be compatible with other deep learning frameworks through the Intel Nervana Graph and ONNX, giving customers more choices of framework and compatibility with the hardware platform that best fits their needs.

Currently, the ONNX format is supported by Microsoft Cognitive Toolkit*, Caffe2* and PyTorch*, with capabilities expanding over time. Through the increased interoperability and vast hardware and software ecosystem fostered by ONNX and Intel, developers can construct and train models at an accelerated pace to deliver new AI solutions.

Project Brainwave, Microsoft’s FPGA-based deep learning platform for accelerating real-time AI, will also support ONNX to help customers accelerate models from a variety of frameworks. Project Brainwave leverages Intel® Stratix® 10 FPGAs to enable the acceleration of deep neural networks (DNNs) that replicate “thinking” in a manner conceptually similar to that of the human brain. Microsoft was the first major cloud service provider to deploy FPGAs in its public cloud infrastructure, and it continues to demonstrate technology advancements today with Intel Stratix 10 FPGAs.

To learn more about how Intel and ONNX are making AI more accessible across industries, visit this Intel Nervana blog post.

About Intel

Intel (Nasdaq: INTC) is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore’s Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers’ greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better. To learn more about Intel’s innovations, go to newsroom.intel.com and intel.com.

© Intel Corporation. Intel, the Intel logo and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.