Intel, Google Deepen Collaboration to Advance AI Infrastructure
NEWS HIGHLIGHTS:
- Intel® Xeon® processors to continue powering Google Cloud infrastructure across AI training, inference and general-purpose workloads
- Expanded co-development of custom ASIC-based infrastructure processing units (IPUs) to improve efficiency, utilization and performance at scale
- Collaboration reinforces the central role of CPUs and IPUs in modern, heterogeneous AI systems
SANTA CLARA, Calif., April 9, 2026 – Intel Corporation (NASDAQ: INTC) and Google today announced a multiyear collaboration to advance the next generation of AI and cloud infrastructure, reinforcing the critical role of CPUs and custom infrastructure processing units (IPUs) in scaling modern, heterogeneous AI systems.
As AI adoption accelerates, infrastructure is becoming more complex and heterogeneous, driving increased reliance on CPUs for orchestration, data processing and system-level performance. Through this collaboration, Intel and Google will align across multiple generations of Intel® Xeon® processors to improve performance, energy efficiency and total cost of ownership across Google’s global infrastructure.
AI doesn’t run on accelerators alone; it runs on systems. And CPUs are at the core of those systems.
Google Cloud continues to deploy Intel Xeon processors across its workload-optimized instances, including the latest Intel Xeon 6 processors powering C4 and N4 instances. These platforms support a broad range of workloads—from large-scale AI training coordination to latency-sensitive inference and general-purpose computing.
In parallel, Intel and Google are expanding their co-development of custom ASIC-based IPUs. These programmable accelerators offload networking, storage and security functions from host CPUs, improving utilization, increasing efficiency and enabling more predictable performance across hyperscale AI environments.
IPUs are a critical component of modern data center architectures. By handling infrastructure tasks traditionally managed by CPUs, they unlock greater effective compute capacity and allow cloud providers to scale more efficiently without increasing overall system complexity. Together, Xeon CPUs and IPUs form a tightly integrated platform balancing general-purpose compute with purpose-built infrastructure acceleration to deliver more efficient, flexible and scalable AI systems.
Driving Performance and Efficiency at Scale
“AI is reshaping how infrastructure is built and scaled,” said Lip-Bu Tan, CEO of Intel. “Scaling AI requires more than accelerators; it requires balanced systems. CPUs and IPUs are central to delivering the performance, efficiency and flexibility modern AI workloads demand.”
“CPUs and infrastructure acceleration remain a cornerstone of AI systems—from training orchestration to inference and deployment,” said Amin Vahdat, SVP & Chief Technologist, AI Infrastructure, Google. “Intel has been a trusted partner for nearly two decades, and their Xeon roadmap gives us confidence that we can continue to meet the growing performance and efficiency demands of our workloads.”
Building the Foundation for the Next Wave of AI
The expanded collaboration reflects a shared commitment to advancing open, scalable infrastructure for the AI era. By combining general-purpose compute with purpose-built infrastructure acceleration, Intel and Google are enabling a more balanced approach to AI system design, one that improves utilization, reduces complexity and scales more efficiently.
Together, the companies are strengthening the foundation for the next generation of AI-driven cloud services—supporting continued innovation across enterprises, developers and users worldwide.