Read this article at https://www.pugetsystems.com/guides/142

Conclusion

Before we proceed any further, we want to reiterate a few key points. First, our conclusion is based solely on the assumption that higher frequency RAM has looser timings. A few models offer both higher frequency and tighter timings, and those will certainly have a performance edge (although we will leave the question of how much to a future article), but they are not the norm. Second, all of our testing was done with a dedicated video card, so these results do not apply to systems using onboard CPU graphics.

With those caveats out of the way, you can clearly see how tightly balanced RAM frequency and timings are. While a few of our benchmarks did show differences with the higher frequency RAM (WinRAR in particular), the performance gain was highly inconsistent. Some benchmarks actually showed worse performance with the higher frequency RAM, which makes it impossible to make a blanket statement about whether frequency or timings are more important. Add in the fact that on many benchmarks the Intel-based system showed the opposite result of the AMD-based system, and the results are sporadic enough that the answer is simply not straightforward.
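This balance is easy to see in the latency math: a module's absolute CAS latency in nanoseconds is 2000 × CL ÷ data rate (MT/s), since DDR transfers data twice per clock. The sketch below uses typical CL values for each speed grade as an illustration; they are assumptions, not the exact modules we tested.

```python
def cas_latency_ns(data_rate_mts: int, cl: int) -> float:
    """Absolute CAS latency in nanoseconds.

    DDR memory transfers data twice per clock, so one clock period
    is 2000 / data_rate nanoseconds, and the total first-word delay
    is CL clock periods.
    """
    return 2000.0 * cl / data_rate_mts

# Typical (assumed) timings for each DDR3 speed grade:
for rate, cl in [(1333, 9), (1600, 11)]:
    print(f"DDR3-{rate} CL{cl}: {cas_latency_ns(rate, cl):.2f} ns")
```

DDR3-1333 CL9 works out to about 13.50 ns and DDR3-1600 CL11 to about 13.75 ns, so the bandwidth gained at the higher frequency is largely offset by the looser timings, which is exactly why the benchmark results come out so mixed.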

Another factor to take into account is that high frequency RAM in general also has a higher failure rate. From our part failure reports, just the small jump from 1333MHz to 1600MHz RAM brings a nearly fivefold increase in failure rates: a 0.8% failure rate on DDR3-1333 versus a 3.9% failure rate on DDR3-1600. These rates are small in absolute terms, but the increase is notable.

So what does this all mean? Based on the results from our testing, we see no reason to use higher frequency RAM with increased timings unless the system will be used for a very specific application that you know will benefit from the higher frequency. The increase in cost and failure rates is simply not worth the very application-specific performance gains.



Switching into analytical mode: are your part failure reports based on frequencies in general, or separated by brand or perhaps IC manufacturer? Do they include memory overclocked by the end user?

As for the scores, I can validate your results 100% (not that you need it). :) Games have rarely ever shown a benefit from overclocking the RAM frequency compared to overclocking the CPU. However, lower timings do slightly benefit the system's responsiveness, such as when opening and closing big programs.

I often recommend that users buy good name brands with the lowest memory timings for best overall performance. Sometimes buying high frequency CAS9 memory, downclocking it, and using lower timings works out really well. However, if the user needs the performance of very large programs, it's best to stick with low latency, lower frequency, higher capacity memory for maximum stability.

Posted on 2012-04-03 01:59:52

The failure rates are for the last 12 months across all of the desktop (not laptop or server ECC) RAM we've sold. The majority of the failures occur in-house, although roughly 10% or so do happen in the field. As far as we know, none of the failures were the result of overclocking.

The brands are almost entirely Kingston and Patriot. We've tried other brands, but the failure rates have always been higher so we've largely settled on those two.

Posted on 2012-04-03 22:01:32

We keep track of reliability data by SKU. While you can trace that down to a certain IC manufacturer, it doesn't guarantee that the ODM isn't jumping from one IC chip to another. I can say, though, that we haven't had a situation in which we felt compelled to track by IC manufacturer. Kingston, by far, puts out the most stable memory we've seen, and it is no coincidence that they are also the most stable in their IC sourcing and the most conservative in their frequency binning. This is all just overclocked 1333, right? I haven't looked for a while, but that was the case last time I checked. In that case, the reliability of the parts is going to be highly dependent on how aggressively the ODM decides to bin their IC chips. That's why OCZ has an 8% failure rate, and Kingston a 0.5% failure rate! :)

In the case of the numbers in this article, the reliability data was just an aggregate of all the brands we've offered at those speeds. It pretty clearly shows the trend you would expect as the memory makers "overclock" these IC chips more and more.

Posted on 2012-04-04 04:44:50
Eric Garay

I was here on the 6th and forgot to say thanks for the replies and clarification. Agreed on the IC jumping, which was very common a few years ago. We would see a very good series launch and then a quiet IC jump to something less costly, which typically also means a drop in quality. Validating your own batches and keeping in contact with the vendor is the way to go. It's always smart to go with the ODM that isn't afraid to keep you aware of any changes and that does its own proven validation. What you're doing is the most sensible way.

Posted on 2012-04-17 00:05:59
Rohit

Thanks, your article is very useful and informative. You may like to visit Om Nanotech in case you want more information on DDR1 suppliers.

Posted on 2014-09-11 11:29:53