NVIDIA’s new HGX A100 powers AI

Nvidia’s powerful A100 GPU will be part of its HGX AI supercomputing platform, the California graphics-processing giant announced today. New additions to the platform include an 80GB memory module, 400G InfiniBand networking, and Magnum IO GPUDirect Storage software.
The A100, which recently flexed its Ampere-powered muscle by surpassing the Titan V as the most powerful GPU in the OctaneBench benchmark, comes in two versions: one with 40GB of HBM2e memory and one with 80GB. The larger model offers the widest memory bandwidth in the world, transferring more than 2TB per second. Built on a 7nm process, it packs 54.2 billion transistors arranged into 6,912 shading units, 432 texture mapping units, 160 ROPs, and 432 tensor cores.
Technologies such as Magnum IO GPUDirect Storage tie these GPUs into huge supercomputer racks, reducing latency by allowing direct data transfers between GPU memory and storage. The 400Gb/s InfiniBand network gives each 2,048-port switch an aggregate bandwidth of up to 1.64Pb/s, enough to connect more than a million nodes.
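As a quick back-of-the-envelope sketch, the quoted 1.64Pb/s figure can be reproduced from the port count and per-port speed, assuming the total counts both directions of each 400Gb/s link:

```python
# Back-of-the-envelope check of the quoted InfiniBand switch bandwidth.
# Assumption: the 1.64 Pb/s aggregate counts both directions of every port.
ports = 2048
port_speed_gbps = 400  # 400G InfiniBand, per direction

# Gb/s -> Pb/s: divide by 1,000,000
aggregate_pbps = ports * port_speed_gbps * 2 / 1_000_000

print(f"{aggregate_pbps:.2f} Pb/s")  # -> 1.64 Pb/s
```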
“The HPC revolution started in academia and is rapidly expanding to a broad range of industries,” said Jensen Huang, Nvidia’s founder and CEO. “Key dynamics are driving super-exponential, super-Moore’s-law advancements that have made HPC a useful tool for industries. Nvidia’s HGX platform gives researchers unrivaled high-performance computing acceleration to solve the toughest problems industries face.”
The HGX platform is hardly designed for home use; it is deployed by companies such as General Electric, which uses it to run computational fluid dynamics simulations for designing large gas turbines and jet engines. It will also power the University of Edinburgh’s next-generation supercomputer, optimized for computational particle physics, to analyze data from large-scale experiments such as the Large Hadron Collider.