ASUS unveiled a comprehensive display of cutting-edge AI solutions aimed at fostering innovation and expanding the frontiers of supercomputing during Supercomputing 2023 (SC23) in Denver, Colorado, held from November 12 to 17, 2023.

Exhibiting at booth 257, the brand showcased a spectrum of advancements, encompassing generative AI solutions, sustainability breakthroughs developed in collaboration with Intel®, and its latest hybrid immersion-cooling solutions.

At SC23, ASUS highlighted the capabilities of the NVIDIA-qualified ESC N8A-E12 HGX H100 eight-GPU server. Powered by dual-socket AMD EPYC 9004 processors, this server is designed for enterprise-level generative AI and incorporates market-leading integrated capabilities.

ASUS also shared plans to update the H100-based system with an H200-based drop-in replacement in 2024, aligning with NVIDIA’s announcement of the H200 Tensor Core GPU at SC23. The H200 is the first GPU to offer HBM3e, promising faster and larger memory to accelerate generative AI and large language models.

Another notable exhibit at SC23 was the Arm-based 2U4N server, the ASUS RS720QN-E11. Engineered around the NVIDIA Grace CPU Superchip and NVIDIA NVLink®-C2C technology, this server offers a power-efficient, dense infrastructure with direct-to-chip (D2C) cooling. ASUS also introduced a server featuring the latest NVIDIA GH200 Grace Hopper Superchip, aimed at empowering scientists and researchers to address the world’s most complex challenges by accelerating AI and HPC applications that process terabytes of data.

[Image: ASUS demos AI immersion-cooling solutions at SC23]

Complementing the hardware showcases, ASUS hosted a series of session talks at SC23, featuring speakers from prominent industries and domains. The sessions included demonstrations, relevant content, and other engaging activities.

Renowned for its expertise in the AI-supercomputing domain, ASUS provided optimized server design and rack integration tailored to the demands of AI and HPC workloads. The company also presented a no-code AI platform with a complete in-house AI software stack, helping businesses expedite AI development for LLM pre-training, fine-tuning, and inference with reduced risk.

Furthermore, ASUS leveraged its experience as a supercomputer operator in Taiwan, managing both operations support systems and business support systems (OSS and BSS). This involved collaborating with customers to realize data-center-infrastructure strategies that optimize operating expenses (OpEx).
