Smarter 5G Today, Seamless Path to 6G: Intel’s AI-Ready Network Vision

Intel strengthens 5G leadership as Xeon 6 scales and Xeon 6+ approaches—unifying RAN, Core, edge AI and security on one open platform architecture to turn infrastructure into opportunity on the road to 6G.

By Kevork Kechichian, executive vice president and general manager of the Data Center Group

With 6G on the horizon, operators are clear that success won’t come from architectural resets, but from evolving the strong compute foundations already built in 5G. Moreover, progress will come from deploying intelligence responsibly and at scale across existing infrastructure — not by adding complexity, but by strengthening what’s already working.

That’s the lens we’re bringing to Mobile World Congress 2026. This next chapter of network evolution will be won by those with proven industry depth and partner know-how — the ones who make networks simpler, more secure, and more efficient so inference can be deployed within real-world performance targets, power envelopes, and economic constraints.

What we’re hearing from operators

Across the ecosystem, a few themes keep repeating:

  • Inference should be inherent. There’s a need for AI built directly into the network — not added through new accelerators or disruptive architectural changes.
  • Efficiency is imperative. To free resources for new revenue-generating services, decisions are driven by power savings, infrastructure consolidation, and lowering TCO — all while keeping pace with evolving end‑user demands and fast‑shifting usage patterns.
  • Openness builds trust. Operators want a secure, stable, production-grade platform — one that’s open and proven in commercial networks, and that offers a low-risk evolution path toward 6G.

Intel and its partners are bringing more AI compute to the RAN and Core right now with products like Intel Xeon 6 with E-cores, the Xeon 6 SoC, and the Intel Ethernet 800 and 600 series. Intel’s approach is straightforward: deliver an open, secure compute foundation that can run critical workloads — network functions, security, enterprise services, and AI inference — on one platform. With a consistent foundation, operators can modernize each generation without rip‑and‑replace, turning infrastructure into a lever for faster services and better economics. These benefits extend to consumers as well, enabling more reliable connectivity, more personalized experiences, and greater cost‑efficiency.

AI in networks isn’t “CPU vs. GPU” — it’s right compute for the workload

It’s tempting to reduce this discussion to a binary debate: CPU versus GPU. But that’s not how infrastructure evolves — and it’s not how operators build networks.

Different AI workloads will use different forms of compute, and the most effective strategy is to match each workload with the architecture that delivers the best mix of performance, efficiency, cost, and ease of deployment. Intel Xeon 6 with E-cores and the Intel Xeon 6 SoC can expand network capacity and enhance productivity and AI capabilities in the Core and RAN, respectively — all while maintaining openness and operator control.

What doesn’t scale is applying a GPU-first worldview indiscriminately to inference-heavy network workloads. That approach can increase cost and complexity, introduce new operational silos, and force architectural changes that aren’t justified by the workloads themselves.
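The “right compute for the workload” principle can be sketched as a simple placement heuristic. This is purely illustrative: the workload profiles, thresholds, and decision rules below are hypothetical examples, not an Intel tool, benchmark, or API.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    latency_budget_ms: float   # end-to-end inference deadline
    model_size_gb: float       # model parameter footprint
    duty_cycle: float          # fraction of time the model is active (0..1)

def place(w: Workload) -> str:
    """Toy heuristic: prefer the integrated on-CPU path unless the model
    is too large for the host or imposes a sustained hard-real-time load,
    in which case a discrete accelerator may be justified despite its
    extra cost, power draw, and operational silo."""
    if w.model_size_gb > 20:
        # Model too large for a comfortable in-memory CPU deployment
        return "discrete accelerator"
    if w.latency_budget_ms < 1 and w.duty_cycle > 0.8:
        # Sub-millisecond deadline at near-constant utilization
        return "discrete accelerator"
    return "on-CPU (integrated AI acceleration)"

for w in [
    Workload("beamforming assist", latency_budget_ms=0.5, model_size_gb=0.1, duty_cycle=0.3),
    Workload("traffic forecasting", latency_budget_ms=500, model_size_gb=2, duty_cycle=0.1),
    Workload("large generative model", latency_budget_ms=200, model_size_gb=70, duty_cycle=0.9),
]:
    print(f"{w.name}: {place(w)}")
```

In this sketch, most inference-heavy network workloads land on the integrated path, and only the outliers justify dedicated hardware — which is the point of matching compute to workload rather than defaulting to one architecture.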

In networks, the question isn’t “Can we run AI?” It’s “Can we run AI without re-architecting everything we already operate, and what impact will it have on our cost and power budgets, now and for the foreseeable future?”

In the RAN, AI is about matching each workload to the right compute—not defaulting to discrete accelerators. Intel Xeon 6 SoC integrates AI acceleration directly into the vRAN stack using Intel Advanced Matrix Extensions (AMX) and Intel vRAN Boost to run the vast majority of inference on the server itself—without the cost, power, complexity, or space demands of separate AI hardware. This delivers real impact: lower TCO, better use of existing infrastructure, and AI that can be deployed in live networks today with no architectural overhaul. For operators exploring AI but prioritizing efficiency and affordability, Xeon 6 SoC enables “AI without compromise”: predictable performance, simpler operations, and easy scaling across thousands of cells.
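The TCO argument for integrated acceleration can be made concrete with back-of-the-envelope arithmetic. Every figure below is a hypothetical placeholder chosen for illustration — none of it is Intel pricing or benchmark data:

```python
# Back-of-the-envelope 5-year cost per cell site of adding AI inference.
# All numbers are hypothetical placeholders, not vendor data.
ELECTRICITY_USD_PER_KWH = 0.15
HOURS_PER_YEAR = 8760
YEARS = 5

def five_year_cost(capex_usd: float, extra_watts: float) -> float:
    """Upfront hardware cost plus the energy cost of the added power
    draw, assuming continuous operation over the whole period."""
    energy_kwh = extra_watts / 1000 * HOURS_PER_YEAR * YEARS
    return capex_usd + energy_kwh * ELECTRICITY_USD_PER_KWH

# Integrated path: inference runs on the server already at the site,
# so no added hardware; assume a modest incremental power draw.
integrated = five_year_cost(capex_usd=0, extra_watts=15)

# Discrete path: assume an accelerator card added per site.
discrete = five_year_cost(capex_usd=2000, extra_watts=75)

print(f"integrated: ${integrated:,.0f}  discrete: ${discrete:,.0f}")
```

Under these illustrative assumptions the integrated path costs a small fraction of the discrete one per site, and the gap compounds across thousands of cells — which is why per-site cost and power, not peak throughput, tend to drive the RAN decision.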

Here are a few real-world examples of how operators are using the Intel Xeon 6 SoC in today’s networks—demonstrating how its architecture delivers the right compute for AI workloads without forcing a CPU-versus-GPU tradeoff.

  • Rakuten Mobile is working with Intel to leverage the built‑in AI acceleration of Intel Xeon 6 SoC, jointly training, optimizing, and deploying advanced AI models tailored for demanding RAN workloads with ultra‑low, real‑time latency requirements.
  • Vodafone has committed to adopting Intel Xeon 6 SoCs for its large‑scale Open RAN and vRAN modernization across Europe, building on its earlier UK deployments where Intel Xeon powered its first commercial Open RAN rollouts.

Powering secure, more energy-efficient networks for Intel’s customers and partners

Intel already powers the vast majority of commercial 5G at global scale, with an unmatched footprint in virtualized Core and RAN. With the Xeon 6 processor family — including offerings tuned for Core, RAN, and Edge requirements — we’re extending that leadership with more capacity per watt, integrated capabilities, and a software foundation designed to help operators modernize without disruption.

In the 5G Core, Intel is everywhere, with momentum for Xeon 6 with E-cores accelerating as operators look for proven platforms that can address rising traffic, tightening energy constraints, and increasing operational costs. This momentum reflects more than a decade of collaboration across the ecosystem, delivering 5G core and telco cloud solutions on Intel silicon and software that achieve meaningful power savings without compromising performance, service quality, or deployment simplicity.

As 5G core requirements intensify, power efficiency alone is no longer sufficient to address growth and emerging operational challenges. Intel Xeon 6 with E-cores meets operator demand for platforms that combine power efficiency with built-in security and long-term flexibility. The architecture integrates zero-trust security through Intel Trust Domain Extensions (TDX) to protect sensitive data in use, accelerates data in transit with Intel QuickAssist Technology (QAT), and enables 5G core AI inferencing to run on existing infrastructure—allowing new capabilities to be introduced and scaled without disruptive network redesigns.

Here are a few real‑world examples of Intel Xeon 6 with E-cores in action since its launch just over a year ago:

  • SK Telecom is deploying Xeon 6 with E-cores and Intel Ethernet 800 Series products in its mobile core production environment.
  • NTT DOCOMO has selected Xeon 6 with E-cores and the Intel Ethernet E830 Network Adapter for next-generation mobile core deployments.

Looking ahead, network equipment providers (NEPs) and service providers have already seen firsthand how Intel’s E‑core architecture delivers efficiency, density, performance, and security across today’s Core infrastructure environments. As customer needs evolve toward platforms that offer predictable performance, strong reliability, and efficient scalability to reduce TCO, Intel is advancing to the next step in the Xeon 6 roadmap: Intel Xeon 6+. Built on Intel 18A and designed for exceptional efficiency, Xeon 6+ gives operators a platform that scales workloads aggressively, cuts energy use, and enables more intelligent network services. It also increases core density while reducing power consumption, directly improving total cost of ownership. From 5G infrastructure to cloud‑native applications, these processors are engineered to optimize performance, efficiency, and cost, redefining data‑center economics on the road to 6G. We’re previewing this next generation here at MWC, with more details to come in the months ahead.

The takeaway

AI will be everywhere in the network. But if we want it to scale, we have to anchor the conversation in reality:

  • Inference-first: deploy real-time decisioning across RAN, Core, and Edge.
  • Efficient and operationally simple: designed for telco constraints, not lab benchmarks.
  • Open and trusted: interoperable platforms that preserve operator control and resilient supply chains.
  • Outcome-driven: judged by measurable improvements — not hype.

MWC is a direction-setting moment. Our focus is straightforward: help operators turn infrastructure into opportunity with an open, secure, efficient compute foundation that modernizes from generation to generation — and makes AI practical without costly reinvention.

That’s the most credible path from 5G’s promise to 6G’s potential.