Intel and Ohio Supercomputer Center Double AI Processing Power with New HPC Cluster
Intel, Dell Technologies, Nvidia and OSC announce plans for a next-generation high-performance computing cluster to power AI innovation in research and science.
A collaboration among Intel, Dell Technologies, Nvidia and the Ohio Supercomputer Center (OSC) today introduces Cardinal, a cutting-edge high-performance computing (HPC) cluster. The system is purpose-built to meet Ohio's increasing demand for HPC resources across research, education and industry innovation, particularly in artificial intelligence (AI).
AI and machine learning have become integral tools for answering complex research questions in scientific, engineering and biomedical fields. As these technologies continue to demonstrate their effectiveness, academic domains such as agricultural sciences, architecture and social studies are embracing their potential as well.
Cardinal is equipped with hardware capable of meeting the demands of expanding AI workloads. In both capability and capacity, the new cluster will be a substantial upgrade over the system it replaces, the Owens Cluster, launched in 2016.
The Cardinal Cluster is a heterogeneous system built on Dell PowerEdge servers and the Intel® Xeon® CPU Max Series with high-bandwidth memory (HBM), a foundation chosen to efficiently handle memory-bound HPC and AI workloads while fostering programmability, portability and ecosystem adoption. The system will have:
- 756 Max Series CPU 9470 processors, which will provide 39,312 total CPU cores.
- 128 gigabytes (GB) of HBM2e and 512 GB of DDR5 memory per node.
With a single software stack and traditional programming models on an x86 foundation, the cluster will more than double OSC's capabilities while supporting a broadening set of use cases and allowing for easy adoption and deployment.
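For a sense of the workloads this CPU design targets, below is a minimal sketch of a memory-bandwidth-bound kernel (a STREAM-style triad) written in plain C with OpenMP, the kind of traditional x86 programming model the cluster supports. It is an illustration only: the array size is arbitrary, and any memory-placement tuning (for example, binding allocations to HBM when the Max Series CPUs run in flat memory mode) would depend on Cardinal's actual configuration, which is not described here.

```c
/* Minimal sketch of a memory-bandwidth-bound kernel (STREAM-style triad).
   Kernels like this are limited by how fast memory can be streamed, which is
   the class of workload high-bandwidth memory is meant to accelerate.
   Array size and scalar are illustrative, not tuned for any particular node. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void) {
    const size_t n = 1UL << 28;              /* ~268M doubles per array, ~2 GB each */
    const double scalar = 3.0;
    double *a = malloc(n * sizeof *a);
    double *b = malloc(n * sizeof *b);
    double *c = malloc(n * sizeof *c);
    if (!a || !b || !c) { fprintf(stderr, "allocation failed\n"); return 1; }

    /* First-touch initialization so pages are placed near the threads that use them. */
    #pragma omp parallel for
    for (size_t i = 0; i < n; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

    double t0 = omp_get_wtime();
    #pragma omp parallel for
    for (size_t i = 0; i < n; i++)
        a[i] = b[i] + scalar * c[i];          /* two reads and one write per element */
    double t1 = omp_get_wtime();

    /* Three arrays of 8-byte doubles are each streamed once. */
    double gbytes = 3.0 * (double)n * sizeof(double) / 1e9;
    printf("triad bandwidth: %.1f GB/s\n", gbytes / (t1 - t0));

    free(a); free(b); free(c);
    return 0;
}
```

Built with, for example, `gcc -O3 -fopenmp`, a kernel like this scales with memory bandwidth rather than core count, which is why pairing HBM with conventional x86 cores requires no change to the programming model.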
The system is also equipped with:
- Thirty-two nodes, each with 104 cores, 1 terabyte (TB) of memory and four Nvidia Hopper architecture-based H100 Tensor Core GPUs with 94 GB of HBM2e memory, interconnected by four NVLink connections (illustrated by the sketch after this list).
- Nvidia Quantum-2 InfiniBand, which provides 400 gigabits per second (Gbps) of low-latency networking performance, helping deliver 500 petaflops of peak AI performance (FP8 Tensor Core, with sparsity) for large AI-driven scientific applications (a rough accounting of this figure follows the list).
- Sixteen nodes, each with 104 cores, 128 GB of HBM2e and 2 TB of DDR5 memory, for large symmetric multiprocessing (SMP)-style jobs.
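A rough accounting of the headline GPU figure: 32 nodes with four H100 GPUs each is 128 GPUs, and Nvidia's published peak for H100 FP8 Tensor Core throughput with sparsity is roughly 4 petaflops per GPU, which multiplies out to approximately 500 petaflops across the cluster.

To make the GPU-node topology a bit more concrete, here is a small, hedged sketch in C against the CUDA runtime API. It simply enumerates the GPUs visible on a node and reports which pairs can address each other's memory directly (peer-to-peer access, which on NVLink-connected nodes like these is typically carried over NVLink). Nothing in it is specific to Cardinal; it is a generic probe that assumes only a working CUDA installation.

```c
/* Hedged sketch: enumerate GPUs on a node and check pairwise peer-to-peer (P2P)
   capability via the CUDA runtime API. P2P access is what NVLink-connected GPUs
   use to read and write each other's memory directly; the API itself does not
   say whether the underlying link is NVLink or PCIe. Compile with nvcc. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void) {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        fprintf(stderr, "cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("GPUs visible on this node: %d\n", count);

    for (int i = 0; i < count; i++) {
        for (int j = 0; j < count; j++) {
            if (i == j) continue;
            int can = 0;
            cudaDeviceCanAccessPeer(&can, i, j);   /* 1 if GPU i can access GPU j's memory */
            printf("GPU %d -> GPU %d: peer access %s\n", i, j, can ? "possible" : "not possible");
        }
    }
    return 0;
}
```

On a node with four fully connected H100s, one would expect every pair to report peer access as possible; the probe is only a sanity check, not a measurement of NVLink bandwidth.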
“The Intel Xeon CPU Max Series is an optimal choice for developing and implementing HPC and AI workloads, leveraging the most widely adopted AI frameworks and libraries,” said Ogi Brkic, vice president and general manager of Data Center AI Solutions product line at Intel. “The inherent heterogeneity of this system will empower OSC’s engineers, researchers and scientists, enabling them to fully exploit the doubled memory bandwidth performance it offers. We take pride in supporting OSC and our ecosystem with solutions that significantly expedite the analysis of existing and future data for their targeted focus areas.”
More: Read the full announcement on the Ohio Supercomputer Center website.