Supercomputing for All is Closer than You Think

Igniting the Next Era of Supercomputing for All

High performance computing is the foundation of research and discovery that improves lives across the planet. As we enter the exascale era, the combination of accelerated computing, high performance computing, AI and deep learning has driven exponential growth in data and the need for an unprecedented pace of innovation. From converged AI and HPC workloads to multipurpose systems to diverse architectures, we’re asking more of our HPC systems than ever before. Join Intel at SC21 to learn about our HPC strategy and new innovations, including the latest Intel® Xeon® Scalable processors, data center GPUs and powerful software tools. Together, let’s accelerate the next era of innovation in HPC.

High performance computing (HPC) represents the pinnacle of technology, with some of the world’s most significant modern-day discoveries made using these advanced machines. Today, we are at the threshold of a new generation of HPC, where the technology’s scalability and ubiquity can transform all our lives.

Nowhere has this been more evident than in the battle against COVID-19. From the start of the pandemic, the scientific and research communities tapped these advanced supercomputers – both within research labs and in cloud HPC-as-a-service environments – to monitor, study and treat the disease, and eventually to develop the drugs used against the SARS-CoV-2 novel coronavirus. The speed at which this was done was breathtaking, and it would have been impossible without the broad availability of HPC technologies.

As we welcome the exascale era of computing, we have only gotten started. Supercomputing already facilitates scientific discoveries and helps address critical issues such as climate change and the search for cures to chronic diseases. In the future, it will help solve problems and crises we have yet to encounter – or even imagine.

Intel’s objective is to democratize HPC and deliver “supercomputing for all.” To do this, we must embrace the diverse technologies needed to deliver orders-of-magnitude performance improvements, transform the accessibility of HPC and rethink how we build the components that power these systems. The foundation of our strategy is built around performance, openness and scale.

Performance: It Starts with a Ubiquitous Compute Architecture

The x86 architecture is the workhorse of today’s HPC, powering the vast majority of systems. This enormous installed base gives developers and customers access to the world’s largest ecosystem for supporting and deploying their HPC workloads. The x86 architecture also delivers consistent generational performance improvements.

The current generation of Intel® Xeon® Scalable processors has been widely adopted by our HPC ecosystem partners, and we are adding new capabilities with Sapphire Rapids – our next-generation Xeon Scalable processor, now sampling with customers. This platform brings a range of new capabilities to the HPC ecosystem, including in-package high bandwidth memory (HBM2e) for the first time, enabled by the Sapphire Rapids multi-tile architecture. Sapphire Rapids also delivers enhanced performance, new accelerators, PCIe Gen 5 and other capabilities optimized for AI, data analytics and HPC workloads.

HPC workloads are evolving rapidly. They are becoming more diverse and specialized, requiring a mix of heterogeneous architectures. While the x86 architecture continues to be the workhorse for scalar workloads, if we are to deliver orders-of-magnitude performance gains and move beyond the exascale era, we must look critically at how HPC workloads run on vector, matrix and spatial architectures, and we must ensure these architectures work together seamlessly.

Intel has adopted an “entire workload” strategy, in which workload-specific accelerators and graphics processing units (GPUs) work seamlessly with central processing units (CPUs) from both hardware and software perspectives. We are deploying this strategy with our next-generation Intel Xeon Scalable processors and Intel® Xe HPC GPUs (code-named “Ponte Vecchio”) that will power the 2 exaflop1 Aurora supercomputer at Argonne National Laboratory. Ponte Vecchio has the highest compute density per socket and per node, packing 47 tiles with our advanced packaging technologies: EMIB and Foveros. More than 100 HPC applications are already running on Ponte Vecchio, and we are working with partners and customers – including ATOS, Dell, HPE, Lenovo, Inspur, Quanta and Supermicro – to deploy Ponte Vecchio in their latest supercomputers.

Democratizing HPC Through Openness

Democratizing HPC and delivering supercomputing for all demands an open, collaborative approach. Intel is committed to delivering open platforms supported by industry-defining standards that will foster broad collaboration. Our goal is to drive standards and create key hardware platforms that the industry rallies around and builds upon.

One area that has been hindered by a lack of adopted standards is GPU programming. Since GPUs moved into the HPC realm, the industry has faced the challenge of maintaining separate software stacks for CPU and GPU workloads. The oneAPI programming model seeks to break down these silos.

oneAPI is an open, unified, cross-architecture programming model for CPUs, GPUs and other accelerator architectures (FPGAs and more) that allows heterogeneous compute environments to be programmed with a single code base and software stack. This way, developers write code once and run it across architectures, and customers aren’t locked into a single vendor.
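To make that idea concrete, here is a minimal sketch – not taken from this announcement – of what single-source, cross-architecture code looks like in SYCL, the C++-based model at the heart of oneAPI. The vector-add kernel and default device selection are illustrative only; the same source can target a CPU, GPU or other accelerator chosen at run time.

```cpp
// Minimal single-source SYCL sketch (oneAPI DPC++ style): one code base,
// with the runtime selecting the device (CPU, GPU or other accelerator).
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
  constexpr size_t n = 1024;
  std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

  // The default selector picks whatever supported device is available.
  sycl::queue q{sycl::default_selector_v};

  {
    // Buffers manage data movement between the host and the selected device.
    sycl::buffer<float> buf_a(a.data(), sycl::range<1>(n));
    sycl::buffer<float> buf_b(b.data(), sycl::range<1>(n));
    sycl::buffer<float> buf_c(c.data(), sycl::range<1>(n));

    q.submit([&](sycl::handler& h) {
      sycl::accessor in_a(buf_a, h, sycl::read_only);
      sycl::accessor in_b(buf_b, h, sycl::read_only);
      sycl::accessor out_c(buf_c, h, sycl::write_only, sycl::no_init);

      // One kernel definition, compiled for every target architecture.
      h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
        out_c[i] = in_a[i] + in_b[i];
      });
    });
  }  // Buffer destruction copies results back to the host vectors.

  std::cout << "Device: "
            << q.get_device().get_info<sycl::info::device::name>() << "\n"
            << "c[0] = " << c[0] << "\n";
  return 0;
}
```

The key point is that host and device code live in one C++ source file; built with a SYCL-capable compiler such as the oneAPI DPC++/C++ compiler (for example, icpx -fsycl), the same program can offload to whichever supported device the runtime finds.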

Next month, we will release the Intel® oneAPI 2022 toolkits, with more than 900 new features added since oneAPI 2021 was released in December 2020. The upcoming release adds cross-architecture development capabilities for CPUs and GPUs through the first unified C++/SYCL/Fortran compiler and Data Parallel Python. Today we also announced two additional oneAPI centers of excellence, which join a thriving worldwide ecosystem of leading research and academic institutions to deliver open-source code ports and to extend hardware support, new technologies, services and curricula that increase oneAPI ecosystem adoption.

We realize a lot more work needs to be done to truly democratize HPC. The work we are doing today will enable ubiquitous access to the latest HPC technologies in the future.

Manufacturing at Scale: Intel’s IDM 2.0 Strategy

Delivering supercomputing for all requires scale. And Intel is positioned to deliver the supply of components and technologies required to fuel innovation and growth. The differentiated formula of Intel’s IDM 2.0 strategy enables us to deliver a new era of innovation, manufacturing and product leadership to our HPC customers.

We are moving at a torrid pace in executing our bold, multiyear IDM 2.0 strategy. For our HPC customers, this gives us the ability to combine our industry-leading manufacturing capabilities and our leadership in packaging technologies with the best IP to design and deliver the products required to power the next era of supercomputers.

Our commitment to supercomputing for all is unwavering, and we are investing in the technological advances needed to solve the most complex problems.

I look forward to seeing what we can achieve together as we approach the new generation of supercomputing for all.

Jeff McVeigh is vice president and general manager of the Super Compute Group at Intel Corporation.

1 Peak Performance