The details of HBM3 are slowly being revealed. The first vague specifications came from SK hynix, but Rambus has now announced that it has developed an HBM3-ready combined PHY and memory controller, which brings more information to light. This includes HBM3 reaching 8.4 Gbps per pin and 1.075 TB/s of throughput, and supporting up to 16 memory channels and 16-Hi memory stacks. That is more than twice the capacity and bandwidth of HBM2E, and heralds faster GPUs and SoCs in the future.
Like HBM2E, the new HBM3 standard uses a 1024-bit-wide interface, but with 64 bits per channel it supports 16 channels, twice as many as HBM2E. This doubling accounts for the largest share of the performance improvement, including the more-than-doubled throughput. In addition, the architecture supports two pseudo channels per channel (up to 32 pseudo channels in total), allowing traffic to be tuned to the size of memory accesses, which is particularly helpful for AI workloads.
| Peak performance (per Rambus) | HBM3 | HBM2 / HBM2E | HBM |
| --- | --- | --- | --- |
| Maximum pin transfer rate (I/O speed) | 8.4 Gbps | 3.2 Gbps / 3.65 Gbps | 1 Gbps |
| Maximum dies per stack | 16 (16-Hi) | 8 (8-Hi) / 12 (12-Hi) | 4 (4-Hi) |
| Maximum package capacity | 64GB | 24GB | 4GB |
| Maximum bandwidth | 1,075 GBps | 410 / 460 GBps | 128 GBps |
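The headline bandwidth figure follows directly from the interface width and per-pin rate. A quick sanity check, using the 1024-bit interface (16 channels of 64 bits) and 8.4 Gbps/pin quoted above:

```python
# Sanity check of the HBM3 peak-bandwidth figure quoted above.
# Inputs are the article's numbers: 16 channels x 64 bits, 8.4 Gbps/pin.

pin_rate_mbps = 8400          # 8.4 Gbps per pin, expressed in Mbps
width_bits = 16 * 64          # 16 channels x 64 bits = 1024-bit interface

# Total bandwidth: width x per-pin rate, converted from Mbit/s to GByte/s.
bandwidth_gbs = width_bits * pin_rate_mbps / 8000

print(width_bits)     # -> 1024
print(bandwidth_gbs)  # -> 1075.2, matching the ~1.075 TB/s figure
```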
The increased channel count supports more memory dies, enabling stacks of up to 16-Hi (at up to 32 Gb per die), for a total capacity of up to 32GB today, which may reach 64GB in the future.
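The capacity figures work out from per-die density and stack height. A rough check, where the die densities (16 Gb today, 32 Gb later) are assumptions rather than figures from the article:

```python
# Rough capacity check for a 16-Hi HBM3 stack.
# Die densities (16 Gb now, 32 Gb later) are assumed, not stated in the article.

def stack_capacity_gb(die_density_gbit: int, stack_height: int) -> float:
    """Total stack capacity in GB for a given per-die density and stack height."""
    return die_density_gbit * stack_height / 8  # Gbit -> GByte

print(stack_capacity_gb(16, 16))  # -> 32.0 GB with 16Gb dies
print(stack_capacity_gb(32, 16))  # -> 64.0 GB with future 32Gb dies
```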
The company said that, paired with a memory vendor's HBM3 devices, its PHY and memory controller are designed to achieve a throughput of 1.075 TB/s at 8.4 Gbps per pin. That is more than double the corresponding HBM2E figures of 460 GB/s and 3.65 Gbps per pin, respectively. It is worth noting that these specifications reflect the peak capabilities of the PHY and memory controller; memory vendors will need some time to bring their devices up to such speeds, so early products should be expected to run below these peaks.
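Using the HBM2E peak figures from the table above (460 GB/s and 3.65 Gbps/pin), the "more than twice" claim roughly checks out:

```python
# Ratio of the quoted HBM3 peaks to the HBM2E peaks (figures from the article).
bw_ratio = 1075 / 460      # bandwidth: 1,075 GB/s vs 460 GB/s
pin_ratio = 8.4 / 3.65     # per-pin rate: 8.4 Gbps vs 3.65 Gbps

print(round(bw_ratio, 2), round(pin_ratio, 2))  # -> 2.34 2.3
```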
Rambus also provides its customers with reference 2.5D package designs (including interposers and packages) to accelerate the integration of its PHY and memory controller with SoCs. These use a fairly standard interposer design that connects to the memory stacks via microbumps.
Rambus's products fully adhere to the JEDEC specification, so they do not (currently) support the more exotic HBM-PIM (processing-in-memory) technology with embedded compute capabilities. Such memories, offered by several memory manufacturers, are still in the early stages of industry adoption, but they are widely seen as candidates for future JEDEC support.
Many details of the actual JEDEC HBM3 specification remain vague, because the standards body has not officially disclosed them. However, as the ecosystem continues to evolve and players such as Rambus and SK hynix share preliminary details, the picture is becoming clearer.
Rambus told us that the first SoCs using its design will arrive late next year or in early 2023, so we expect the official specification to be announced in the coming months.