Though the formal specification has yet to be ratified by JEDEC, the memory industry as a whole is already gearing up for the upcoming launch of the next generation of High Bandwidth Memory, HBM3. Following announcements earlier this summer from controller IP vendors like Synopsys and Rambus, this morning SK Hynix is announcing that it has finished development of its HBM3 memory technology – which, according to the company, makes it the first memory vendor to do so. With controller IP and now the memory itself nearing or at completion, the stage is being set for formal ratification of the standard, and eventually for HBM3-equipped devices to start rolling out later in 2022.
Overall, the relatively lightweight press release from SK Hynix is roughly equal parts technical details and boasting. While there are only three memory vendors producing HBM – Samsung, SK Hynix, and Micron – it’s still a technically competitive field due to the challenges involved in making deep-stacked and TSV-connected high-speed memory work, and thus there’s a fair bit of pride in being first. At the same time, HBM commands significant price premiums even with its high production costs, so memory vendors are also eager to be first to market to cash in on their technologies.
In any case, both IP and memory vendors have taken to announcing some of their HBM wares even before the relevant specifications have been announced. We saw both parties get an early start with HBM2E, and now once again with HBM3. This leaves some of the details of HBM3 shrouded in a bit of mystery – mainly that we don’t know what the final, official bandwidth rates are going to be – but announcements like SK Hynix’s help narrow things down. Still, these sorts of early announcements should be taken with a small grain of salt, as memory vendors are fond of quoting in-lab data rates that may be faster than what the spec itself defines (e.g. SK Hynix’s HBM2E).
Getting into the technical details, according to SK Hynix their HBM3 memory will be able to run as fast as 6.4Gbps/pin. This would be double the data rate of today’s HBM2E, which formally tops out at 3.2Gbps/pin, or 78% faster than the company's off-spec 3.6Gbps/pin HBM2E SKUs. SK Hynix’s announcement also indirectly confirms that the basic bus widths for HBM3 remain unchanged, meaning that a single stack of memory is 1024-bits wide. At Hynix’s claimed data rates, this means a single stack of HBM3 will be able to deliver 819GB/second worth of memory bandwidth.
| SK Hynix HBM Memory Comparison | HBM3 | HBM2E | HBM2 |
|---|---|---|---|
| Max Capacity | 24 GB | 16 GB | 8 GB |
| Max Bandwidth Per Pin | 6.4 Gb/s | 3.6 Gb/s | 2.0 Gb/s |
| Number of DRAM ICs per Stack | 12 | 8 | 8 |
| Effective Bus Width | 1024-bit | 1024-bit | 1024-bit |
| Voltage | ? | 1.2 V | 1.2 V |
| Bandwidth per Stack | 819.2 GB/s | 460.8 GB/s | 256 GB/s |
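The per-stack bandwidth figures in the table fall straight out of the pin data rate and the 1024-bit bus width. As a quick sanity check, here is the arithmetic in Python (an illustrative sketch; the function and variable names are our own):

```python
# Peak per-stack bandwidth = pin data rate (Gb/s) x bus width (bits) / 8 bits per byte.
def stack_bandwidth_gb_s(pin_rate_gbps: float, bus_width_bits: int = 1024) -> float:
    """Return the peak per-stack bandwidth in GB/s."""
    return pin_rate_gbps * bus_width_bits / 8

# Matching the table rows: HBM3, HBM2E, and HBM2 respectively.
for gen, rate in [("HBM3", 6.4), ("HBM2E", 3.6), ("HBM2", 2.0)]:
    print(f"{gen}: {stack_bandwidth_gb_s(rate):.1f} GB/s")
```

Running this reproduces the 819.2 GB/s, 460.8 GB/s, and 256 GB/s figures quoted above.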
SK Hynix will be offering their memory in two capacities: 16GB and 24GB. This aligns with 8-Hi and 12-Hi stacks respectively, and means that at least for SK Hynix, their first generation of HBM3 memory is still the same density as their latest-generation HBM2E memory. This means that device vendors looking to increase their total memory capacities for their next-generation parts (e.g. AMD and NVIDIA) will need to use memory with 12 dies/layers, up from the 8 layer stacks they typically use today.
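The capacity figures also pin down the implied per-die density, showing that SK Hynix's first HBM3 dies match its HBM2E dies at 2 GB (16 Gbit) apiece. A minimal illustration (variable names are our own):

```python
# Implied per-die density from the announced stack capacities.
hbm2e_stack_gb = 16    # 8-Hi HBM2E stack capacity
hbm2e_layers = 8
hbm3_12hi_layers = 12  # 12-Hi HBM3 stack

gb_per_die = hbm2e_stack_gb / hbm2e_layers   # 2 GB (16 Gbit) per die
hbm3_12hi_gb = gb_per_die * hbm3_12hi_layers # 12 dies at the same density -> 24 GB
print(gb_per_die, hbm3_12hi_gb)
```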
What will be interesting to see in the final version of the HBM3 specification is whether JEDEC sets any height limits for 12-Hi stacks of HBM3. The group punted on the matter with HBM2E, where 8-Hi stacks had a maximum height but 12-Hi stacks did not. That in turn impeded the adoption of 12-Hi stacked HBM2E, since it wasn’t guaranteed to fit in the same space as 8-Hi stacks – or indeed any common size at all.
On that matter, the SK Hynix press release notably calls out the efforts the company put into minimizing the size of their 12-Hi (24GB) HBM3 stacks. According to the company, the dies used in a 12-Hi stack – and apparently just the 12-Hi stack – have been ground to a thickness of just 30 micrometers, allowing SK Hynix to keep the overall height of the sizable stack in check. Minimizing stack height is beneficial regardless of standards, but if this means that HBM3 will require 12-Hi stacks to be shorter – and ideally, the same height as 8-Hi stacks for physical compatibility purposes – then all the better for customers, who would be able to more easily offer products with multiple memory capacities.
Past that, the press release also confirms that one of HBM’s core features, integrated ECC support, will be returning. The standard has offered ECC since the very beginning, allowing device manufacturers to get ECC memory “for free”, as opposed to having to lay down extra chips with (G)DDR or using soft-ECC methods.
Finally, it looks like SK Hynix will be going after the same general customer base for HBM3 as they already are for HBM2E. That is to say, high-end server products, where the additional bandwidth of HBM3 is essential, as is the density. HBM has of course made a name for itself in server GPUs such as NVIDIA’s A100 and AMD’s MI100, but it’s also frequently tapped for high-end machine learning accelerators, and even networking gear.
We’ll have more on this story in the near future once JEDEC formally approves the HBM3 standard. In the meantime, it’s sounding like the first HBM3 products should begin landing in customers’ hands in the later part of next year.
Data storage requirements have kept increasing over the last several years. While SSDs have taken over the role of the primary drive in most computing systems, hard drives continue to be the storage media of choice in areas dealing with large amounts of relatively cold data. Hard drives are also suitable for workloads that are largely sequential and not performance sensitive. The $/GB metric for SSDs (particularly with QLC in the picture) is showing a downward trend, but it is still not low enough to match HDDs in that market segment.
In terms of recent product introductions, we have retail availability of Toshiba's 16TB N300 and X300 drives. The 18TB drives using FC-MAMR are also scheduled to make a retail appearance later this year. Seagate had launched the IronWolf Pro and Exos 18TB drives last year, and no new capacity updates have been announced for this season yet. The same is the case with Western Digital: the 16TB and 18TB WD Red Pro models were introduced last September. The company did make the OptiNAND announcement in August 2021, promising the integration of UFS storage in their 20TB+ HDDs in order to improve performance and reliability. With the HDD supply chain seeing some improvements, prices have largely stabilized. Some high-capacity models in the WD Gold line are currently running 15-20% lower than launch MSRPs.
Having very recently reviewed the MateBook X Pro 2021 (13.9-inch), Huawei's local PR team in the UK offered me a last-minute chance to examine the newest addition to their laptop portfolio. The Huawei MateBook 16, on paper at least, comes across as a workhorse machine designed for the office and for working on the go. It has a powerful CPU that can go into a high-performance mode when plugged in and sip power when it needs to, no discrete graphics to get in the way, and a massive 84 Wh battery designed for an all-day workflow. It comes with a large, color-accurate 3:2 display, and with direct screen sharing with a Huawei smartphone/tablet/monitor, there’s a lot of potential if you buy into the ecosystem. The question remains – is it any good?
Today, after weeks and even months of leaks and teasers, Google has finally announced the new Pixel 6 and Pixel 6 Pro – its new flagship line-up of phones for 2021, carrying over into next year. The two phones had been teased on numerous occasions and have probably one of the worst leak records of any phone ever, so today’s event revealed few unknowns. Even so, Google has put on the table a pair of very interesting devices – perhaps the most interesting Pixel phones the company has ever released.
This week seems to be Arm's week across the tech industry. Following yesterday's Arm SoC announcements from Apple, today sees Arm kick off their 2021 developer's summit, aptly named DevSummit. As always, the show is opening up with a keynote being delivered by Arm CEO Simon Segars, who will be using the opportunity to lay out Arm's vision of the future.
Arm chips are already in everything from toasters to PCs – and Arm isn't stopping there. So be sure to join us at 8am PT (15:00 UTC) for our live blog coverage of Arm's keynote.