Intel AI Platforms Accelerate Microsoft Phi-3 GenAI Models


Intel’s upcoming Meteor Lake client PC processors are the first PC platform from Intel featuring a built-in neural VPU, a dedicated AI engine integrated directly on the SoC to power-efficiently run AI models. (Credit: Intel Corporation)

Intel, in collaboration with Microsoft, enables support for several Phi-3 models across its data center platforms, AI PCs and edge solutions.


What’s New: Intel has validated and optimized its AI product portfolio across client, edge and data center for several of Microsoft’s Phi-3 family of open models. The Phi-3 family of small, open models can run on lower-compute hardware, be more easily fine-tuned to meet specific requirements and enable developers to build applications that run locally. Intel’s supported products include Intel® Gaudi® AI accelerators and Intel® Xeon® processors for data center applications and Intel® Core™ Ultra processors and Intel® Arc™ graphics for client.

“We provide customers and developers with powerful AI solutions that utilize the industry’s latest AI models and software. Our active collaboration with fellow leaders in the AI software ecosystem, like Microsoft, is key to bringing AI everywhere. We’re proud to work closely with Microsoft to ensure Intel hardware – spanning data center, edge and client – actively supports several new Phi-3 models.”

–Pallavi Mahajan, Intel corporate vice president and general manager, Data Center and AI Software

Why It Matters: As part of its mission to bring AI everywhere, Intel continuously invests in the AI software ecosystem by collaborating with AI leaders and innovators.

Intel worked with Microsoft to enable Phi-3 model support for its central processing units (CPUs), graphics processing units (GPUs) and Intel Gaudi accelerators on launch day. Intel also co-designed the accelerator abstraction in DeepSpeed, which is an easy-to-use deep learning optimization software suite, and extended the automatic tensor parallelism support for Phi-3 and other models on Hugging Face.
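Automatic tensor parallelism shards a model's weight matrices across devices at load time. The sketch below shows how that could be invoked through DeepSpeed's `init_inference` API, under stated assumptions: the model ID and `tp_size` are illustrative, `load_with_autotp` is a hypothetical wrapper (not executed here, since it needs DeepSpeed, PyTorch, a Phi-3 checkpoint and multiple devices), and `tp_split_sizes` is a hypothetical pure helper showing how a weight dimension is partitioned across ranks.

```python
def tp_split_sizes(dim: int, tp_degree: int) -> list[int]:
    """Partition a weight dimension as evenly as possible across tp_degree ranks.

    Hypothetical helper, for illustration only: shows the per-rank shard
    sizes tensor parallelism produces for a given hidden dimension.
    """
    base, rem = divmod(dim, tp_degree)
    return [base + (1 if r < rem else 0) for r in range(tp_degree)]


def load_with_autotp(model_id: str = "microsoft/Phi-3-mini-4k-instruct",
                     tp_size: int = 2):
    """Illustrative sketch: load a Hugging Face model and shard it with
    DeepSpeed's automatic tensor parallelism. Assumes deepspeed, torch and
    transformers are installed and tp_size devices are available.
    """
    import deepspeed
    import torch
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(model_id,
                                                 torch_dtype=torch.float16)
    # With replace_with_kernel_inject=False, DeepSpeed takes its automatic
    # tensor-parallelism path, sharding linear layers across tp_size ranks.
    return deepspeed.init_inference(
        model,
        tensor_parallel={"tp_size": tp_size},
        dtype=torch.float16,
        replace_with_kernel_inject=False,
    )
```

For example, a 3072-wide hidden dimension on two ranks splits as `tp_split_sizes(3072, 2)`, i.e. 1536 columns per device.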

Phi-3 models are small enough for on-device inference, making lightweight model development such as fine-tuning or customization possible on AI PCs and edge devices. Intel client hardware is accelerated through comprehensive software frameworks and tools, including PyTorch and the Intel® Extension for PyTorch for local research and development, and the OpenVINO™ toolkit for model deployment and inference.
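As an illustration of the local-inference path, the sketch below loads a Phi-3 model through Hugging Face `transformers`. The model ID and generation settings are assumptions, and `generate_locally` is not executed here since it downloads weights; the pure `format_phi3_chat` helper mirrors the `<|user|>`/`<|assistant|>`/`<|end|>` chat template published for Phi-3 instruct models and can be checked without any download.

```python
def format_phi3_chat(messages: list[dict]) -> str:
    """Render a chat as a Phi-3-style prompt string.

    Follows the <|role|> ... <|end|> template published for Phi-3 instruct
    models; when the tokenizer is available, transformers'
    apply_chat_template does this for you.
    """
    parts = [f"<|{m['role']}|>\n{m['content']}<|end|>\n" for m in messages]
    return "".join(parts) + "<|assistant|>\n"


def generate_locally(prompt: str,
                     model_id: str = "microsoft/Phi-3-mini-4k-instruct") -> str:
    """Illustrative sketch: run Phi-3 inference on a local device.

    Assumes transformers and torch are installed; this downloads model
    weights on first use, so it is shown but not run here.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id,
                                                 torch_dtype=torch.float16)
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=64)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tok.decode(out[0][inputs["input_ids"].shape[1]:],
                      skip_special_tokens=True)
```

A single-turn chat such as `format_phi3_chat([{"role": "user", "content": "Hi"}])` yields the prompt string the model expects, ending with the `<|assistant|>` turn marker.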

What’s Next: Intel is committed to meeting the generative AI needs of its enterprise customers and will continue to support and optimize software for Phi-3 and other leading state-of-the-art language models.

For performance and technical details, visit the Intel Developer Blog.

More Context: Intel Developer Blog | Microsoft Phi-3 blog