The successor to HBM2e memory, "ultra-wideband" memory, is on the way, but it will not be available until 2022.

Micron is developing a new memory standard to drive tomorrow's graphics cards. A few days ago, we covered the revelation that Nvidia's upcoming GeForce RTX 3090 will feature up to 21 Gbps GDDR6X memory, and in the same technical brief where we extrapolated that information, Micron also mentioned the ultra-wideband HBMnext standard under development.


Unfortunately, the tech brief doesn't go into much detail about what to expect from HBMnext, aka next-generation high-bandwidth memory, other than to say when it will be available.

"HBMnext is expected to hit the market in late 2022. Micron is fully involved in the ongoing JEDEC standardization. As the market demands more data, products like HBM will thrive and drive greater volumes," Micron said.

And that's all we get on this issue under the aptly titled "A Glimpse Into the Future" section of the technical overview.

Micron goes into a bit more detail about its HBM2e memory solution, stating that it is fully JEDEC compliant and will be available this year; HBM2e will be offered in 4-layer 8GB and 8-layer 16GB stacks with transfer rates up to 3.2 Gbps. For comparison, the latest revision of HBM2 in Nvidia's V100 accelerator is 2.4 Gbps.
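As a rough sanity check on those figures, the peak bandwidth of a stack follows directly from the per-pin transfer rate and HBM's 1024-bit interface per stack. A minimal sketch (the function name is our own, for illustration):

```python
def stack_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int = 1024) -> float:
    """Peak bandwidth of one HBM stack in GB/s:
    per-pin rate (Gb/s) times bus width (bits), divided by 8 bits per byte."""
    return pin_rate_gbps * bus_width_bits / 8

# Micron's HBM2e at 3.2 Gb/s per pin:
print(stack_bandwidth_gbs(3.2))  # about 409.6 GB/s per stack
# HBM2 in Nvidia's V100 at 2.4 Gb/s per pin:
print(stack_bandwidth_gbs(2.4))  # about 307.2 GB/s per stack
```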

HBM2e is primarily intended for accelerators used in AI training, and it is not only Micron that manufactures it, but Samsung and SK Hynix as well. However, AMD has dabbled in high-bandwidth memory in some of its consumer products, and there are rumors that at least one variant of its next-generation Navi GPUs will feature HBM2e, though nothing has been announced yet.

The main reason HBM memory has not featured heavily in consumer products is that it is relatively expensive compared to GDDR5 and GDDR6.

"HBM is a powerful version of an ultra-bandwidth solution, but due to the complex nature of the product, it is a relatively high-cost solution; HBM is targeted at very high-bandwidth applications that are not very cost sensitive," Micron notes.

This type of memory essentially requires stacking multiple DRAM layers, which sit on the same silicon interposer as the GPU. The end result is higher density and higher throughput at lower clock rates, but at a much higher price.

Micron has taken its technology overview offline, but Reddit's NuLuumo posted a mirror link that you can still download and view as a PDF document.

Thanks, Tom's Hardware

