Benchmarking Ethernet Interconnect for HPC/AI workloads / Pichetti, Lorenzo; De Sensi, Daniele; Sivalingam, Karthee; Nassyr, Stepan; Cesarini, Daniele; Turisini, Matteo; Pleiter, Dirk; Artigiani, Aldo; Vella, Flavio. - (2024), pp. 869-875. (SC24-W: Workshops of the International Conference for High Performance Computing, Networking, Storage and Analysis, Atlanta, November 17-22, 2024) [10.1109/SCW63240.2024.00124].
Benchmarking Ethernet Interconnect for HPC/AI workloads
Vella Flavio
2024-01-01
Abstract
Interconnects have always played a cornerstone role in HPC. Since the inception of the Top500 ranking, interconnect statistics have been dominated by two competing technologies: InfiniBand and Ethernet. However, even though Ethernet is very popular due to its versatility and cost-effectiveness, InfiniBand used to provide higher bandwidth and continues to feature lower latency. Industry is seeking a further evolution of the Ethernet standards to enable a fast, low-latency interconnect for emerging AI workloads by offering competitive, open-standard solutions. This paper analyzes the early results obtained from two systems relying on an HPC Ethernet interconnect, one relying on 100G and the other on 200G Ethernet. Preliminary findings indicate that the Ethernet-based networks exhibit competitive performance, closely aligning with InfiniBand, especially for large message exchanges.
| File | Type | License | Size | Format | Access |
|---|---|---|---|---|---|
| Benchmarking_Ethernet_Interconnect_for_HPC_AI_workloads.pdf | Versione editoriale (Publisher's layout) | Tutti i diritti riservati (All rights reserved) | 1.17 MB | Adobe PDF | Archive managers only (Solo gestori archivio) |
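As context for the kind of measurement the paper reports (latency and bandwidth across message sizes, where large messages favor high-bandwidth links), the core idea of a ping-pong microbenchmark can be sketched as below. This is purely illustrative and is not the paper's methodology: the paper targets MPI traffic over 100G/200G Ethernet and InfiniBand fabrics, whereas this sketch echoes messages over a local TCP socket just to show how round-trip time and effective bandwidth are derived per message size.

```python
# Illustrative ping-pong microbenchmark over a local TCP socket.
# NOT the benchmark used in the paper; it only sketches the measurement:
# echo messages of increasing size, derive round-trip time and bandwidth.
import socket
import threading
import time


def recv_exact(conn, n):
    """Receive exactly n bytes (or fewer on EOF)."""
    buf = bytearray()
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            break
        buf += chunk
    return bytes(buf)


def echo_server(srv):
    conn, _ = srv.accept()
    with conn:
        while True:
            hdr = recv_exact(conn, 8)          # 8-byte length header
            if len(hdr) < 8:
                break                          # peer closed the connection
            size = int.from_bytes(hdr, "big")
            payload = recv_exact(conn, size)
            conn.sendall(hdr + payload)        # echo the message back


def pingpong(sizes, iters=10):
    """Return {size: (round-trip seconds, bandwidth MB/s)} per message size."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))                 # ephemeral port on loopback
    srv.listen(1)
    threading.Thread(target=echo_server, args=(srv,), daemon=True).start()
    cli = socket.create_connection(srv.getsockname())
    results = {}
    for size in sizes:
        payload = bytes(size)
        start = time.perf_counter()
        for _ in range(iters):
            cli.sendall(size.to_bytes(8, "big") + payload)
            recv_exact(cli, size + 8)          # wait for the full echo
        elapsed = time.perf_counter() - start
        rtt = elapsed / iters                  # seconds per round trip
        bw = (2 * size * iters) / elapsed / 1e6  # MB/s, both directions
        results[size] = (rtt, bw)
    cli.close()
    srv.close()
    return results


if __name__ == "__main__":
    for size, (rtt, bw) in pingpong([1 << 10, 1 << 16, 1 << 20]).items():
        print(f"{size:>8} B  rtt={rtt * 1e6:8.1f} us  bw={bw:8.1f} MB/s")
```

On a real fabric the same pattern is typically run with MPI point-to-point calls (e.g. an `MPI_Send`/`MPI_Recv` ping-pong between two nodes), and the bandwidth curve versus message size is what makes the large-message regime, where the abstract notes Ethernet aligns closely with InfiniBand, visible.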
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.