The growing need for increased memory bandwidth and the higher performance targets of next-generation computer systems create a fundamental challenge for contemporary system architects. CPUs strive for higher core counts, improved performance per core, and greater power efficiency. Each advanced core needs sufficient memory bandwidth, and large numbers of such cores require both high DRAM (Dynamic Random-Access Memory) capacity and high aggregate bandwidth. New memory architectures beyond DDR4 are required to meet next-generation bandwidth-per-core requirements.
DDR5 memory answers the need for higher bandwidth, offering substantial improvements over previous DRAM generations. With a robust list of new and enhanced features, DDR5 improves overall system performance, delivering faster data transfer and lower power consumption. DDR5 also emphasizes density, making it particularly well suited to enterprise, cloud, and big-data applications. It powers new and emerging technology realms such as artificial intelligence, autonomous cars, augmented reality, embedded vision, and High-Performance Computing (HPC).
DDR5 Memory and Its Benefits
DDR5 is a fifth-generation double data rate (DDR) SDRAM (Synchronous Dynamic Random-Access Memory). While previous generations of memory concentrated on minimizing power consumption and were driven by mobile and data center applications, DDR5's primary driver has been the need for more bandwidth. DDR5 promises improvements over DDR4 in memory capacity, speed, and power efficiency. Several key feature additions and improvements enable DDR5's bandwidth increase; primary among these is a dramatic increase in device data rates.
While DDR4 spanned data rates from 1600 MT/s to 3200 MT/s, DDR5 is currently defined with data rates ranging from 3200 MT/s up to 6400 MT/s. This data rate increase not only allows bandwidth-per-core to remain constant as core counts per CPU increase (shown with the red arrow in Figure 1), but also allows for higher total bandwidths. The following figure includes data-bus efficiencies from a simulated workload to calculate potential effective bandwidth across different DDR4 and DDR5 data rates.
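The effective-bandwidth idea behind Figure 1 can be sketched with simple arithmetic: peak bandwidth is the data rate times the bus width in bytes, derated by a bus-efficiency factor. The efficiency values here are hypothetical placeholders for illustration, not Micron's simulated workload figures.

```python
# Illustrative effective-bandwidth calculation for a 64-bit DDR module.
# Efficiency figures are hypothetical, not measured values.

def effective_bandwidth_gbs(data_rate_mts: int, bus_width_bits: int = 64,
                            efficiency: float = 1.0) -> float:
    """Peak bandwidth in GB/s, derated by a bus-efficiency factor."""
    bytes_per_transfer = bus_width_bits / 8
    return data_rate_mts * bytes_per_transfer * efficiency / 1000

# DDR4-3200 at 100% efficiency: 3200 MT/s * 8 B = 25.6 GB/s peak
print(effective_bandwidth_gbs(3200))   # 25.6
# DDR5-6400 doubles the peak on the same 64-bit module width
print(effective_bandwidth_gbs(6400))   # 51.2
```

The same function shows why data rate is the dominant lever: doubling MT/s doubles the ceiling even before efficiency gains from DDR5's architectural changes are counted.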
Figure 1: DDR5 Maintains Bandwidth with Increased Core Count (Image Source: Micron)
Another significant change with DDR5 is the power architecture. Power management moves from the motherboard to the DIMM itself via an on-module power management integrated circuit (PMIC) fed from a 12 V rail. DIMMs are printed-circuit-board (PCB) modules carrying several DRAM chips and supporting either a 64-bit or a 72-bit data width. With voltage regulation moved off the motherboard, each DIMM is now responsible for its own regulation needs.
With DDR5, the DRAM and registering clock driver (RCD) buffer voltage drops from 1.2 V to 1.1 V, roughly eight percent lower than DDR4. On its own this may not seem like much, but data-center operators can run tens of thousands of machines, and in tightly packed servers memory modules can consume hundreds of watts, so the savings add up. DDR5 also delivers the increased reliability, availability, and serviceability (RAS) that modern data centers need.
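A quick first-order check of that voltage reduction: dynamic CMOS power scales roughly with the square of supply voltage at a fixed frequency, so an 8% voltage cut saves more than 8% of dynamic power. This is an approximation for intuition, not a measured figure for any specific DRAM device.

```python
# First-order estimate of the power impact of DDR5's lower supply voltage.
# The V^2 scaling of dynamic power is an approximation, not a datasheet value.
V_DDR4, V_DDR5 = 1.2, 1.1

voltage_drop = (V_DDR4 - V_DDR5) / V_DDR4        # ~8.3% lower supply voltage
dynamic_power_ratio = (V_DDR5 / V_DDR4) ** 2     # ~0.84 of DDR4's dynamic power

print(f"{voltage_drop:.1%}")             # 8.3%
print(f"{1 - dynamic_power_ratio:.1%}")  # 16.0%
```

Multiplied across thousands of modules in a fleet, a roughly 16% reduction in dynamic memory power is a meaningful operating-cost saving.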
In DDR5, each DIMM has two channels, as shown in Figure 2. Each channel is 40 bits wide: 32 data bits plus eight ECC bits. While the total data width is the same (64 bits), two smaller independent channels improve memory-access efficiency. In the dual-channel DDR5 DIMM architecture, the left and right halves of the DIMM are each served by an independent channel. The overall result is improved concurrency and, effectively, a doubling of the available memory channels in the system.
Figure 2: DDR5 DIMM illustrating two independent sub-channels (Image Source: Micron)
DDR5 also improves error-correcting code (ECC) support over DDR4, an essential feature for servers. What's more, DDR5 chips implement this feature on-die, correcting single-bit errors inside the DRAM itself and offloading that work from the CPU's memory controller. Alongside error-transparency mode, post-package repair, and read/write CRC modes, on-die ECC brings additional gains in processing power and paves the way for higher-capacity DRAM, which translates to higher-capacity DIMMs.
DDR5 RAM sticks have the same number of pins (288) as DDR4 modules, but the pin layout is different, so DDR5 modules cannot be used in a DDR4 slot. DDR5's improved design comprises 32 banks distributed over eight bank groups, compared with DDR4's 16 banks in four bank groups. The burst length is doubled from eight to 16, allowing a single burst to access 64 bytes of data and bringing significant improvements in concurrency (within a channel) and memory efficiency (across the two sub-channels).
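The doubled burst length pairs with the halved channel width described above: a DDR5 BL16 burst on a 32-bit sub-channel moves exactly the same 64 bytes as a DDR4 BL8 burst on a 64-bit channel, which matches a typical CPU cache line.

```python
# Burst size = burst length (transfers) * channel width (bits) / 8.
# Shows why DDR5's BL16 on a 32-bit sub-channel stays cache-line friendly.

def burst_bytes(burst_length: int, channel_width_bits: int) -> int:
    return burst_length * channel_width_bits // 8

ddr4 = burst_bytes(8, 64)    # DDR4: BL8 on a 64-bit channel
ddr5 = burst_bytes(16, 32)   # DDR5: BL16 on one 32-bit sub-channel
print(ddr4, ddr5)            # 64 64
```

The payoff is that each 64-byte fetch occupies only one sub-channel, leaving the other free to serve a different request concurrently.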
DDR5 doubles the per-die density of DDR4, with dies holding 32 Gb to 64 Gb instead of DDR4's maximum of 16 Gb. Together with the lower operating voltage, this is expected to extend battery life; energy-saving RAM matters for all battery-powered devices, such as headphones, mobile phones, laptops, and tablets. Things look even brighter on the enterprise side. DDR5 supports die stacking, so memory vendors can potentially stack up to 16 dies in one package. As a result, a single load-reduced DIMM (LRDIMM) could reach a capacity of 4 TB.
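A back-of-the-envelope check of that 4 TB LRDIMM figure, using the 16-high stacking mentioned above. The 64 Gbit die density is an assumption chosen for illustration; actual products and package counts vary.

```python
# Rough LRDIMM capacity arithmetic. DIE_GBIT = 64 is an assumed (maximum
# defined) DDR5 die density; real modules may use smaller dies.
DIE_GBIT = 64
DIES_PER_STACK = 16

package_gb = DIES_PER_STACK * DIE_GBIT // 8   # GB per 16-high stacked package
packages_for_4tb = 4 * 1024 // package_gb     # packages needed for a 4 TB module
print(package_gb, packages_for_4tb)           # 128 32
```

Thirty-two stacked packages is within the placement count of a double-sided server DIMM, which is why the 4 TB figure is plausible at maximum die density.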
Table 1: Key feature differences between DDR4 and DDR5 SDRAM
Whom Does This Benefit?
Data centers have the greatest need for the latest memory technology, as they must satisfy constant demand for lower power, higher density for more memory storage, and faster transfer speeds. Micron has been instrumental in leading the definition and adoption of DDR5 in data centers, working to meet customer demands for faster boot and loading times while fitting within each platform's tight power constraints. With DDR5, servers work more efficiently, squeezing more ROI out of the investment made in each server. DDR5 enables the next generation of server workloads by delivering more than an 85 percent increase in memory performance. The key to enabling these workloads is higher-performance, denser, higher-quality memory.
Figure 3: DDR5 will enable the next generation of server workloads by delivering more than an 85% increase in memory performance (Image Source: Micron).
Micron's low-power DDR5 (LPDDR5) DRAM is designed to meet the growing demand for higher memory performance and lower energy consumption across a wide array of markets beyond mobile, including automotive, client PCs, and networking systems built for 5G and AI applications. Not to be confused with DDR5, LPDDR5 is the fifth generation of Low Power Double Data Rate technology, originally developed for mobile devices because of its higher power efficiency. That efficiency enables high-performance computing in automobiles while minimizing power consumption for both electric and conventional vehicles, contributing to greener transportation with lower emissions. Micron's automotive LPDDR5 is also ruggedized to support extreme temperature ranges and is qualified to automotive reliability standards.
Developers of cloud, enterprise, and artificial intelligence applications are also going to benefit from next-generation DDR5 DIMMs. Memory and storage are the heart of AI. With many AI accelerator options, including GPUs, FPGAs, and ASICs, the heterogeneous data center continues to demand high performance and high-density memory.
Figure 4: Different AI Tasks require different memory and storage (Image Source: Micron)
Micron sees varying memory and storage requirements depending on the landscape and the AI task being performed (Figure 4). The next generation of memory and storage technologies is key to alleviating the bandwidth, latency, density, power, and cost bottlenecks that would otherwise limit future AI applications.