What Is HBM2? Is It a Relic for Consumers?

In this article, I will write about HBM2, which was used in high-end GPUs such as AMD's Radeon Vega series and NVIDIA's TITAN V. HBM (High Bandwidth Memory) is a stacked memory that realizes a very wide memory bus.

Two ways to increase memory bandwidth

There are two ways to increase memory bandwidth. One is raising the memory clock, which can be seen in the progress of GDDR. The other is widening the bus, which is what stacked memory such as HBM aims at.

Memory bandwidth is the product of the memory clock (the data rate) and the memory bus width. Increasing bandwidth can be thought of as delivering more cargo to a destination. If the cargo is carried by trucks, raising the memory clock corresponds to making the trucks faster, and widening the bus corresponds to adding lanes to the road. Both methods therefore contribute to memory bandwidth.
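The relation above can be checked with a quick back-of-the-envelope calculation. The sketch below uses the published specs of two example cards (Radeon VII: 4096-bit HBM2 bus at 2.0 Gbps per pin; RTX 2080: 256-bit GDDR6 bus at 14 Gbps per pin) purely to illustrate the formula; the function name is my own.

```python
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: bus width x per-pin data rate / 8."""
    return bus_width_bits * data_rate_gbps / 8

# HBM2 (e.g. Radeon VII): very wide bus, modest per-pin rate
hbm2 = bandwidth_gb_s(4096, 2.0)   # 1024.0 GB/s

# GDDR6 (e.g. RTX 2080): narrow bus, high per-pin rate
gddr6 = bandwidth_gb_s(256, 14.0)  # 448.0 GB/s

print(hbm2, gddr6)
```

The two designs reach comparable bandwidth by opposite routes: HBM2 trades per-pin speed for bus width, GDDR6 does the reverse.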

The structure of HBM

HBM has two main features. One is that multiple memory dies are stacked and connected by TSVs (Through-Silicon Vias); the other is that a sub-board called a silicon interposer sits between the processor and the memory.

Some advantages of HBM

With conventional memory, achieving a wide bus increases the physical distance between the memory and the processor, and as a result the operating voltage and power consumption rise. Stacked memory, by contrast, saves mounting area (compare, for example, the board layouts of the Radeon VII with HBM2 and the RTX 2080 with GDDR6) and avoids these problems.

Also, TSVs have short connection distances, so resistance is lower and the signals are less susceptible to noise. Power consumption can thus be reduced, waveform degradation and signal delay are suppressed, and high-speed operation becomes possible.

The silicon interposer is a substrate made of silicon, and it can reduce operating voltage and power consumption. In addition, silicon allows a large amount of wiring in a tight space, so the very wide bus can be wired directly between the memory and the processor (without bundling signals). This also reduces mounting area compared with wiring on an ordinary substrate.
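The wiring density matters because each HBM2 stack exposes a 1024-bit interface, whereas a GDDR6 chip exposes only 32 bits. A small sketch of the arithmetic (the per-device interface widths are standard spec values; the variable names are my own):

```python
HBM2_STACK_BITS = 1024  # one HBM2 stack exposes a 1024-bit interface
GDDR6_CHIP_BITS = 32    # one GDDR6 chip exposes a 32-bit interface

stacks = 4              # e.g. a 4096-bit GPU such as Radeon VII
total_bits = stacks * HBM2_STACK_BITS
chips_needed = total_bits // GDDR6_CHIP_BITS  # GDDR6 chips to match that width

print(total_bits, chips_needed)  # 4096 128
```

Routing 4096 signal traces to 128 discrete chips on an ordinary PCB is impractical, which is why such widths are realized only with an interposer.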

The serious disadvantage of HBM

However, HBM has a fatal disadvantage: high cost. This is unavoidable as long as TSVs and a silicon interposer are used. Because of this problem, and because of the progress of GDDR, HBM2 is no longer used in consumer GPUs (AMD adopted HBM2 in the Vega series, but switched to GDDR6 in the following Navi series).

Probably, the best and last consumer GPU equipped with HBM2 will be the Radeon VII (with 16 GB of VRAM). HBM2, with its very wide bus, is attractive, but will it become a relic?

