Nvidia Preps A100 GPU With 80GB HBM2E Memory

Nvidia has recently listed an unreleased version of its A100 compute GPU with 80GB of HBM2E memory that uses a standard full-length, full-height (FLFH) card form factor, which means this powerful GPU plugs into PCIe slots just like a “regular” graphics card. Given that Nvidia’s A100 and V100 compute GPUs primarily target servers in cloud data centers, Nvidia prioritizes the SXM versions (which mount directly onto the motherboard) over regular PCIe versions. That does not mean, however, that the company will not offer its leading GPU in a conventional PCIe card form factor.
Nvidia’s A100-PCIe accelerator is based on the GA100 GPU, with 6912 CUDA cores and 80GB of HBM2E ECC memory (with 2TB/s of bandwidth). It will offer the same capabilities as the company’s A100-SXM4 accelerator with 80GB of memory, at least in terms of compute (compute capability 8.0) and virtualization/instancing (up to seven MIG instances). Power limits, of course, will differ.
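For readers with access to the hardware, these headline specs are easy to sanity-check. The following minimal CUDA sketch (a rough illustration, not Nvidia’s own tooling; the bandwidth figure is a theoretical peak derived from the reported memory clock and bus width) prints the compute capability, memory size, and approximate peak bandwidth of the first GPU in the system:

```cuda
// query_a100.cu - minimal sketch: query compute capability, memory size,
// and theoretical peak memory bandwidth of the first CUDA device.
// Build (assumes the CUDA toolkit is installed): nvcc query_a100.cu -o query_a100
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        fprintf(stderr, "No CUDA device found\n");
        return 1;
    }
    // An 80GB A100 should report compute capability 8.0 and ~80GB of memory.
    printf("Device: %s\n", prop.name);
    printf("Compute capability: %d.%d\n", prop.major, prop.minor);
    printf("Memory: %.1f GB\n", prop.totalGlobalMem / 1e9);
    // Rough theoretical peak: memory clock (kHz) x bus width (bytes) x 2 for
    // double data rate, converted to GB/s. For the 80GB A100 this lands close
    // to the quoted 2TB/s figure.
    double bw = 2.0 * prop.memoryClockRate * 1e3 * (prop.memoryBusWidth / 8.0) / 1e9;
    printf("Theoretical peak bandwidth: %.0f GB/s\n", bw);
    return 0;
}
```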
Nvidia has not officially launched its A100-PCIe 80GB HBM2E compute card, but since it has already appeared in official documentation, we can expect the company to launch it in the coming months. Because the A100-PCIe 80GB HBM2E compute card is not yet on the market, its actual price is unknown. CDW‘s partners offer the A100 PCIe card with 40GB of memory at prices ranging from US$15,849 to US$27,113, depending on the exact reseller, so the 80GB version will clearly cost more.
Nvidia’s proprietary SXM compute GPU form factor has several advantages over conventional PCIe cards. Nvidia’s latest A100-SXM4 modules support a thermal design power (TDP) of up to 400W (both 40GB and 80GB versions), since it is easier to supply the necessary power to such modules and to cool them (with liquid cooling, for example, in the case of some A100 systems). By contrast, Nvidia’s A100 PCIe cards are rated at up to 250W. PCIe cards, on the other hand, can be used in both rack servers and high-end workstations.
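The power limits themselves can be read programmatically through NVML, the management library that ships with Nvidia’s driver. A minimal sketch, assuming NVML is available and the binary is linked against -lnvidia-ml, might look like this; an A100-SXM4 module should report roughly 400W and the PCIe card roughly 250W:

```cuda
// power_limit.cu - minimal NVML sketch: read the enforced power limit of GPU 0.
// Build (assumes NVML from the Nvidia driver): nvcc power_limit.cu -lnvidia-ml
#include <cstdio>
#include <nvml.h>

int main() {
    if (nvmlInit() != NVML_SUCCESS) {
        fprintf(stderr, "NVML init failed\n");
        return 1;
    }
    nvmlDevice_t dev;
    unsigned int limit_mw = 0;  // NVML reports power limits in milliwatts
    if (nvmlDeviceGetHandleByIndex(0, &dev) == NVML_SUCCESS &&
        nvmlDeviceGetEnforcedPowerLimit(dev, &limit_mw) == NVML_SUCCESS) {
        // Expect ~400W on an A100-SXM4 module, ~250W on the A100 PCIe card.
        printf("Enforced power limit: %u W\n", limit_mw / 1000);
    }
    nvmlShutdown();
    return 0;
}
```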
Nvidia’s cloud data center customers appear to prefer SXM4 modules to cards. Accordingly, Nvidia introduced the A100-SXM4 40GB HBM2E module (with 1.6TB/s of bandwidth) first last year, with the PCIe card version following a few months later. The company likewise launched the A100-SXM4 80GB module (with faster HBM2E) in November last year, but only recently started shipping it.