Intel-backed Zero-Angle Memory (ZAM) features a vertical stacking architecture with nine layers and projected bandwidth rivaling Nvidia’s advanced HBM4 memory, setting the stage for a potential shift in AI processor memory technology.
- ZAM modules stack 9 layers vertically, totaling ~9GB DRAM per unit
- Bandwidth projected close to Nvidia’s HBM4 used in Vera Rubin AI platform
- Commercialization led by SoftBank subsidiary Saimemory Corporation
What happened
Intel has championed a new memory technology called Zero-Angle Memory (ZAM) that arranges DRAM chips in a vertical stack of nine layers: eight for data storage and one control layer. This architecture increases storage density while potentially improving data transfer speed and efficiency over traditional flat memory layouts.
Technical details from an upcoming VLSI conference indicate that each of the eight DRAM data layers in ZAM holds approximately 1.125GB, yielding about 9GB of total capacity per module. The design uses precision fusion bonding and Through-Silicon Vias (TSVs) to electrically connect the layers through a silicon substrate only three microns thick. Commercialization is backed by Saimemory Corporation, a SoftBank subsidiary.
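The capacity figures above can be sanity-checked with simple arithmetic; this sketch uses only the layer count and per-layer size stated in the article (the control layer is assumed to contribute no user-visible DRAM capacity):

```python
# Back-of-envelope check of ZAM's stated capacity figures (illustrative).
TOTAL_LAYERS = 9         # eight data layers plus one control layer
DATA_LAYERS = 8          # control layer assumed to hold no user data
GB_PER_LAYER = 1.125     # approximate capacity of each DRAM data layer

total_gb = DATA_LAYERS * GB_PER_LAYER
print(total_gb)  # 9.0, matching the ~9GB-per-module figure
```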
Why it matters
ZAM’s performance ambitions place it near the bandwidth of Nvidia’s HBM4 memory standard, which powers the high-end Vera Rubin AI platform. Early projections suggest ZAM could deliver 2.5 TBps of throughput, roughly two to three times that of current HBM3 memory, signaling a significant leap for AI-focused memory technology.
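A rough check of that multiplier, assuming a commonly cited HBM3 per-stack figure of about 819 GB/s (a 6.4 Gbps pin speed on a 1024-bit interface; this baseline is an assumption, not from the article):

```python
# Compare ZAM's projected throughput against an assumed HBM3 baseline.
ZAM_TBPS = 2.5                 # projected ZAM throughput, per the article
HBM3_TBPS = 6.4 * 1024 / 8 / 1000  # ~0.82 TB/s per stack (assumed baseline)

ratio = ZAM_TBPS / HBM3_TBPS
print(round(ratio, 1))  # ~3.1, consistent with the "two to three times" claim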
If realized, ZAM could challenge Nvidia’s established position in high-bandwidth memory for AI processors by offering an alternative with similar or better bandwidth and integrated power efficiencies. This competition could influence future AI hardware development and supply chain dynamics in the memory industry.
What to watch next
The forthcoming presentation at the June VLSI conference will be pivotal for validating ZAM’s practical viability beyond paper designs. Key milestones include demonstrating an operational prototype and overcoming the manufacturing challenge of bonding multiple DRAM layers without defects.
Long-term adoption will depend not only on technical specs but also on building an ecosystem with industry support and supply chain readiness. While Nvidia benefits from existing multi-vendor frameworks for HBM4, Intel and Saimemory’s ability to scale production and achieve broad market acceptance will ultimately determine ZAM’s impact.