Micron expects GDDR7 to improve ray tracing and rasterization performance by more than 30% compared to previous generation VRAM.

RAM chip maker Micron recently made some interesting claims about its next generation of ultra-fast memory for graphics cards, GDDR7. Compared with the memory currently in use (GDDR6 and GDDR6X), Micron states that its next-generation technology "is expected to improve ray tracing and rasterization workload frames per second by 30% or more."

That would be a remarkable performance gain by any measure, the kind of uplift usually delivered by the significant architectural changes of a new GPU design. However, while GDDR7's data transfer rate and bandwidth will certainly be at least 30% higher than the fastest GDDR6/6X currently on offer, it's a completely different story when it comes to actual games and applications.

Micron supplies all of the GDDR6X chips used in Nvidia's graphics cards, and the fastest chips it offers are rated at 24 GT/s (24 billion transfers per second). Samsung sells 20 GT/s GDDR6, as well as 18 GT/s parts. For GDDR7, however, Micron is the only company to have published specifications so far, and the two models available for sampling are rated at 28 GT/s and 32 GT/s.
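
As a rough illustration of what those transfer rates mean in raw bandwidth, here is a back-of-the-envelope sketch that converts GT/s into peak GB/s for a given memory bus width. The 256-bit bus is an assumption chosen for illustration (it matches cards in the RTX 4080 Super class); real-world throughput depends on far more than this one figure.

```python
# Peak memory bandwidth = per-pin transfer rate (GT/s) x bus width (bits) / 8
# The 256-bit bus width is an illustrative assumption, not a GDDR7 specification.
BUS_WIDTH_BITS = 256

def peak_bandwidth_gb_s(transfer_rate_gt_s: float, bus_width_bits: int = BUS_WIDTH_BITS) -> float:
    """Theoretical peak bandwidth in GB/s for a given per-pin transfer rate."""
    return transfer_rate_gt_s * bus_width_bits / 8

for label, rate in [("GDDR6X, 24 GT/s", 24), ("GDDR7, 28 GT/s", 28), ("GDDR7, 32 GT/s", 32)]:
    print(f"{label}: {peak_bandwidth_gb_s(rate):.0f} GB/s")
# GDDR6X, 24 GT/s: 768 GB/s
# GDDR7, 28 GT/s: 896 GB/s   (~17% more)
# GDDR7, 32 GT/s: 1024 GB/s  (~33% more)
```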

Increasing 24 by 30% yields 31.2, so it's clear where Micron is getting its performance claims from. But suppose we could pair that GDDR7 with a current graphics card (ignoring the fact that it wouldn't actually work, because the GPU can't drive it). Would games and benchmarks really be 30% faster, as Micron says? To find out, I ran some tests with an RTX 4080 Super, varying the VRAM clock across the widest range I could manage. Everything else was kept the same, so any visible performance difference is purely due to the memory clock change. First up are two 3DMark tests, Steel Nomad and Speed Way. The former uses traditional rasterization for all of its graphics, while the latter incorporates a significant amount of ray tracing.

While I could not create a 30% difference in clock speed, 18% is large enough to estimate how much of a difference faster VRAM would make. As the results above show, an 18% jump between the slowest and fastest VRAM speeds only resulted in a 5% and 7% increase in frame rates across the two 3DMark tests.
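
To make that relationship concrete, here is a small, purely illustrative calculation of how much of the memory clock increase actually showed up as extra frames in those two 3DMark runs, using only the percentages quoted above.

```python
# How much of the ~18% VRAM clock increase translated into frames per second?
# The figures are the percentage gains quoted in the text; this is arithmetic, not new data.
vram_clock_gain = 0.18
fps_gains = {"Steel Nomad": 0.05, "Speed Way": 0.07}

for test, fps_gain in fps_gains.items():
    efficiency = fps_gain / vram_clock_gain
    print(f"{test}: {fps_gain:.0%} FPS gain = {efficiency:.0%} of the clock increase realised")
# Steel Nomad: 5% FPS gain = 28% of the clock increase realised
# Speed Way: 7% FPS gain = 39% of the clock increase realised
```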

The two games I checked, Cyberpunk 2077 and Returnal, both showed similar results at 4K. Without ray tracing enabled, the biggest improvement was in the minimum frame rate: 12% for Cyberpunk 2077 and 11% for Returnal. With ray tracing enabled (path tracing in the case of Cyberpunk 2077) and with DLSS Balanced and Frame Generation both in use, frame rates were playable and improved by 11% and 7% for the two games respectively.

With an 18% increase in VRAM speed, the biggest improvement I saw was 12%, which is respectable. However, that figure applies only to minimum frame rates; the improvement in average frame rates was 6% at best. In other words, even if the RTX 4080 Super could be fitted with 30% faster VRAM, the average frame rate in these games and tests would not come close to improving by 30%.
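
Assuming the scaling stays roughly proportional (a big assumption, since memory bottlenecks rarely scale linearly), a quick extrapolation from those averages gives a sense of what a 30% memory speed bump might plausibly deliver:

```python
# Rough linear extrapolation from the observed results. Real scaling is unlikely to be
# perfectly proportional, so treat this as a ballpark sketch, not a prediction.
observed_clock_gain = 0.18    # VRAM clock increase used in the tests
observed_avg_fps_gain = 0.06  # best-case average FPS gain seen at that clock increase
gddr7_claimed_gain = 0.30     # Micron's headline figure

projected = gddr7_claimed_gain * (observed_avg_fps_gain / observed_clock_gain)
print(f"Projected average FPS gain: {projected:.0%}")  # ~10%, well short of 30%
```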

Admittedly, this is only a small fraction of all the games that could be tested, but given how graphically demanding these titles are, they were good candidates for benefiting from a performance-enhancing hardware change. I strongly suspect there are some "ray tracing and rasterization workloads" out there that do show significant frame rate increases with faster VRAM, and I'm sure at least one of them will appear in GDDR7 marketing material to highlight the benefits the new memory will bring once it arrives in the real world.

However, it is worth remembering that the majority of developers are not going to create games whose rendering performance is 100% memory-limited. Rather, performance will depend far more on the number of shader units and the underlying GPU architecture.

At this point, you may be wondering what the point of GDDR7 is at all. The answer lies in the fact that today's GPUs are partly designed around the best memory technology available. There is no point in building a GPU with an enormous number of shader units if it is physically and economically impossible to supply them all with enough VRAM bandwidth.

Future GPUs, however, will have more shaders than anything you can buy today, and those more powerful models will need faster VRAM to keep them fed with data. The next RTX 5090 may well be 30% faster in games than the current RTX 4090, but that won't be because GDDR7 is 30% faster than GDDR6X. It will be the usual recipe: more shaders, higher clock speeds, and more cache.
