Intel Xeon Remains Only Server CPU on MLPerf

Intel Xeon 6 with Performance-cores achieved an average 1.9x performance improvement over 5th Gen Xeon processors.

What’s New: Today, MLCommons released its latest MLPerf Inference v5.0 benchmarks, showcasing Intel® Xeon® 6 with Performance-cores (P-cores) across six key benchmarks. The results reveal a remarkable 1.9x boost in AI performance over 5th Gen Intel® Xeon® processors, affirming Xeon 6 as a top solution for modern AI systems.

“The latest MLPerf results demonstrate Intel Xeon 6 as the ideal CPU for AI workloads, offering a perfect balance of performance and energy efficiency. Intel Xeon remains the leading CPU for AI systems, with consistent gen-over-gen performance improvements across a variety of AI benchmarks.”

—Karin Eibschitz Segal, Intel corporate vice president and interim general manager of the Data Center and AI Group

Why It Matters: As AI adoption accelerates, CPUs are essential for AI systems, serving as the host node to manage critical functions like data preprocessing, transmission and system orchestration. Intel continues to stand out as the only vendor to submit server CPU results to MLPerf.

In MLPerf Inference v5.0, Intel Xeon 6 with P-cores achieved an average 1.9x performance improvement over 5th Gen Intel® Xeon® processors in key benchmarks, including ResNet50, RetinaNet, 3D-UNet and the new GNN-RGAT. This reinforces Intel Xeon 6 as a preferred CPU for AI and highlights Xeon as a compelling option for running smaller language models.

Intel has made significant strides in AI performance over the past four years. Since its first Xeon submission to MLPerf in 2021 with 3rd Gen Intel® Xeon® processors, Intel has seen a dramatic 15x performance improvement on ResNet50. Software optimizations have further contributed to a 22% gain on GPT-J and an 11% gain on the 3D-UNet benchmark.

[Image: Intel Xeon 6 processor with text overlay highlighting its suitability for data centers and AI workloads, citing the 1.9x gen-over-gen performance increase, 15x performance growth and 22% software-driven gain from the MLPerf Inference v5.0 results.]

How Intel Supports Its Customers: These new MLPerf results demonstrate Intel Xeon’s exceptional performance across solutions from original equipment manufacturers (OEMs) and ecosystem partners. As AI workloads become more integrated with enterprise systems, OEMs prioritize Xeon-based systems to ensure their customers achieve the best AI performance.

Intel worked alongside four key OEM partners – Cisco, Dell Technologies, Quanta and Supermicro – that submitted results with Intel Xeon 6 with P-cores, showcasing diverse AI workloads and deployment capabilities.

More Context: MLPerf Inference v5.0 Results | Intel Xeon 6 Processors (Press Kit)

The Small Print:

Performance varies by use, configuration and other factors. Learn more at www.Intel.com/PerformanceIndex.

Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. Visit MLCommons for more details. No product or component can be absolutely secure.