
Rambus reveals HBM3 details: 1.075 TB/s bandwidth, 16 channels, 16-Hi stacks

Future GPUs keep getting faster, and the details of HBM3 are slowly being revealed. The first vague specifications came from SK Hynix, but Rambus has now announced a new HBM3-ready combined PHY and memory controller, which brings more information to light. According to Rambus, HBM3 will reach 8.4 Gbps per pin and 1.075 TB/s of throughput, and will support up to 16 memory channels and 16-Hi memory stacks. That is more than twice the capacity and bandwidth of HBM2E, and it heralds faster GPUs and SoCs in the future.
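The quoted 1.075 TB/s follows directly from the per-pin rate and the 1024-bit interface described below; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the headline HBM3 bandwidth figure.
# 8.4 Gbps per pin across a 1024-bit interface (figures from the article).

PIN_RATE_GBPS = 8.4      # per-pin data rate, Gbps
INTERFACE_BITS = 1024    # interface width, bits

bandwidth_gbit = PIN_RATE_GBPS * INTERFACE_BITS  # gigabits per second
bandwidth_gbyte = bandwidth_gbit / 8             # gigabytes per second

print(bandwidth_gbyte)  # 1075.2 -> the quoted ~1.075 TB/s
```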

Like HBM2E, the new HBM3 standard uses a 1024-bit wide interface with 64 bits per channel. However, HBM3 supports 16 channels, twice the eight channels of HBM2, and this doubling accounts for the largest share of the performance improvement, including more than doubling the throughput. In addition, the architecture supports two pseudo channels per channel (up to 32 pseudo channels in total), allowing traffic to be tuned to the size of memory accesses, which is particularly helpful for AI workloads.
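The channel arithmetic above can be sketched as follows (a simple illustration; the 32-bit pseudo-channel width is derived from the figures in this paragraph, not stated explicitly by Rambus):

```python
# Channel layout implied by the figures above:
# a 1024-bit interface split into channels and pseudo channels.

INTERFACE_BITS = 1024

hbm2_channels = 8                  # HBM2: 8 channels
hbm3_channels = 16                 # HBM3: 16 channels, double HBM2
hbm3_pseudo_channels = 32          # two pseudo channels per channel

channel_width = INTERFACE_BITS // hbm3_channels          # bits per channel
pseudo_channel_width = INTERFACE_BITS // hbm3_pseudo_channels

print(channel_width, pseudo_channel_width)  # 64 32
```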

Peak performance of Rambus-based HBM3 versus earlier generations:

|                                      | HBM3       | HBM2 / HBM2E           | HBM      |
|--------------------------------------|------------|------------------------|----------|
| Max per-pin transfer rate (I/O speed)| 8.4 Gbps   | 3.2 Gbps / 3.65 Gbps   | 1 Gbps   |
| Max dies per stack                   | 16 (16-Hi) | 8 (8-Hi) / 12 (12-Hi)  | 4 (4-Hi) |
| Max package capacity                 | 64 GB      | 24 GB                  | 4 GB     |
| Max bandwidth                        | 1,075 GB/s | 410 GB/s / 460 GB/s    | 128 GB/s |
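Assuming the maximum stack height and package capacity from the table, the implied per-die capacity can be checked with a short calculation (a sketch; the 16-die, 64 GB pairing is taken from the table, and the per-die figure is inferred from it):

```python
# Implied per-die capacity at HBM3's maximums (figures from the table).
package_gb = 64        # max package capacity, GB
stack_height = 16      # max dies per stack (16-Hi)

die_gb = package_gb / stack_height  # GB per DRAM die
die_gbit = die_gb * 8               # same figure in gigabits

print(die_gb, die_gbit)  # 4.0 GB, i.e. 32 Gb dies
```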
