JEDEC Officially Releases HBM4 Memory Standard

Luke James | May 14, 2025
JEDEC’s HBM4 standard delivers up to 2 TB/s bandwidth, higher capacity (up to 64 GB per stack), and improved efficiency for AI and HPC.

JEDEC has finalized the JESD270-4 High Bandwidth Memory (HBM4) standard, introducing major upgrades in memory speed, efficiency, and scalability. Developed with input from AMD, Nvidia, Google, Micron, and other industry players, HBM4 addresses current memory limitations in AI, high-performance computing (HPC), and advanced data center applications.

SK Hynix shipped the "world’s first 12-layer HBM4 samples" in March 2025.
 

Higher Bandwidth, More Channels, and Scalable Capacity

HBM4 doubles bandwidth over HBM3, reaching 8 Gb/s per pin across a 2,048-bit interface for a total of up to 2 TB/s per stack. The number of independent channels per stack increases from 16 to 32, each divided into two pseudo-channels. This layout enables more parallelism, better concurrency, and less contention—key for AI and HPC tasks that involve simultaneous, high-throughput memory access.

The new standard also increases stack density. HBM4 supports 4-high, 8-high, 12-high, and 16-high configurations with 24 Gb or 32 Gb die densities. A 16-high stack using 32 Gb dies can deliver 64 GB per cube, doubling the capacity of HBM3. This scale supports the growing memory requirements of large AI models and simulation workloads.
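As a quick back-of-envelope check of the 2 TB/s and 64 GB figures, the constants below are the spec values quoted above; the snippet itself is just illustrative arithmetic, not anything defined by JEDEC:

```python
# Back-of-envelope check of the headline HBM4 numbers quoted above.
# Pin rate, interface width, stack height, and die density are the values
# described in the text; the arithmetic is only illustrative.

PIN_RATE_GBPS = 8            # Gb/s per pin
INTERFACE_WIDTH_BITS = 2048  # bits per stack
DIES_PER_STACK = 16          # tallest (16-high) configuration
DIE_DENSITY_GBIT = 32        # Gb per die

bandwidth_gbyte_s = PIN_RATE_GBPS * INTERFACE_WIDTH_BITS / 8  # 2,048 GB/s
capacity_gbyte = DIES_PER_STACK * DIE_DENSITY_GBIT / 8        # 64 GB

print(f"Per-stack bandwidth: {bandwidth_gbyte_s / 1000:.3f} TB/s")  # ~2 TB/s
print(f"Per-stack capacity (16-high, 32 Gb dies): {capacity_gbyte:.0f} GB")
```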

 

Power Flexibility, Reliability, and Backward Compatibility

HBM4 supports flexible voltage options: VDDQ at 0.7 V to 0.9 V and VDDC at 1.0 V or 1.05 V. This allows memory vendors and system designers to trade off power savings against performance as needed. The separation of data and command buses, introduced in HBM4, reduces latency under load and helps avoid resource collisions during simultaneous memory operations.
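To give a rough sense of what the lower VDDQ option buys, dynamic I/O switching power scales roughly with the square of the supply voltage. The sketch below is a generic first-order CV²f estimate using the VDDQ endpoints quoted above, not a figure from the JEDEC standard:

```python
# First-order illustration only: CMOS switching power scales roughly as V^2
# at a fixed capacitance and toggle rate (P ~ C * V^2 * f). The voltages are
# the VDDQ endpoints mentioned above; the ratio is an estimate, not a
# measured or JEDEC-specified figure.
VDDQ_LOW = 0.7   # V
VDDQ_HIGH = 0.9  # V

relative = (VDDQ_LOW / VDDQ_HIGH) ** 2
print(f"I/O switching power at 0.7 V is ~{relative:.0%} of the 0.9 V case")
# -> roughly 60%, i.e. about a 40% first-order reduction
```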

Reliability is improved with features like directed refresh management (DRFM), which addresses row-hammer risks, and enhanced reliability, availability, and serviceability (RAS) functions. These additions are essential for server, data center, and enterprise applications where fault tolerance matters. HBM4 maintains backward compatibility with HBM3 controllers. A single memory controller can support both generations, making it easier to adopt HBM4 in mixed or staged upgrade environments without re-architecting the platform.

 

Parameter | HBM4 Specification
Bandwidth | Up to 2 TB/s per stack (8 Gb/s per pin, 2,048-bit interface)
Channels | 32 independent channels per stack (64 pseudo-channels)
Voltage | VDDQ: 0.7–0.9 V; VDDC: 1.0 V or 1.05 V
Stack Height | 4-high to 16-high
Die Density | 24 Gb or 32 Gb
Max Capacity | 64 GB per stack (16-high, 32 Gb dies)
Refresh | Directed refresh management (DRFM)
RAS Features | Enhanced error correction and diagnostics

 

Potential Applications for HBM4

HBM4 is expected to see early use in AI model training, large-scale inference, HPC simulations, and advanced graphics workloads. These applications push the limits of memory bandwidth, and HBM4 is designed to relieve those constraints while improving power efficiency and density.

Micron has announced plans for HBM4 production starting in 2026, with HBM4E already in development. SK Hynix and Samsung are also preparing their product lines. Major chipmakers, including AMD, Nvidia, and Google, have confirmed HBM4 compatibility in future compute platforms. With standardized controller compatibility and scalable stacking, JEDEC’s release aims to give the industry a more efficient, interoperable memory solution.

 

Implementation and Integration Constraints

Despite its performance benefits, HBM4 faces some technical and economic barriers to adoption. The manufacturing process—relying on through-silicon vias (TSVs) and tight die stacking—is expensive and complex. This drives up cost compared to DDR or GDDR and limits availability to high-end devices.

System integration is also more demanding. HBM4 modules need to be placed very close to the CPU or accelerator die to minimize latency and maintain signal integrity. That means tighter packaging, advanced substrates, and more involved thermal solutions.

Thermal density is another concern. With high data rates and compact form factors, HBM4 generates more heat per unit area than traditional memory. System designers need advanced cooling strategies to avoid throttling performance or risking stability.

These challenges may restrict HBM4 adoption to AI, HPC, and graphics platforms with the budget and technical capabilities to implement it effectively. Broader market availability will depend on future refinements in manufacturing, packaging, and integration tooling.
