Since some people run into problems with four RAM modules on modern MSI mainboards, I wanted to explain the reasons behind that, and why two modules are often superior. The main reason lies in the way the memory slots are connected to the memory controller, which sits inside the CPU. So the first explanation is about:
1) RAM slot layout
All regular mainboards and desktop CPU models have a dual-channel memory system. Since a lot of boards offer four RAM slots, each pair of slots forms one RAM channel. So the four RAM slots are not addressed individually, but in pairs, as two channels. The slot pairs can be connected on the board in one of two ways: "daisy chain" or "T-Topology". This slot layout decision - the way the slots are wired up - has a big influence on whether the board works best with two or with four modules.
Here is a slide from an MSI presentation, showing that almost all of today's boards have a "daisy chain" memory slot layout. This layout heavily favors two-module operation. The presentation is a bit older, but it's safe to say that the vast majority of recent mainboards (for AMD and Intel) also use a daisy chain layout, and this is confirmed in several reviews. MSI especially are known to use this layout on almost all their modern boards. For other mainboard makers it depends on the board model, but they also tend to prefer this layout.
Daisy chain means that the slots of each pair are connected one after the other, so the layout is optimized for two modules total. The right slot of each channel is the end point.
With two RAM modules, they go into slots 2 and 4, counted from the left, as per the mainboard manual. Meaning, into the second slot of each channel, which is the end point. The reason: this puts them at the very end of the PCB traces coming from the CPU, which is important for the electrical properties.
PCB (printed circuit board) traces are the thin signal lines that are visible on the mainboard, especially between the CPU and the RAM slots.
Why is this important? The PCB traces, going from the memory controller contacts of the CPU, to each contact of the RAM slots, are optimized to result in exactly the same distance between all those points. They are essentially "zig-zagging" across the board for an electrically ideal layout, making a few extra turns if a direct line would lead to an uneven distance.
This is done so that, with two modules, a) each RAM module is at the very end of the electrical traces coming from the CPU's memory controller, and b) each module has exactly the same distance to the memory controller across all contacts. We are dealing with nanosecond-exact timings, so all this matters.
On a mainboard with a daisy-chain RAM slot layout, this optimization is done with only two modules in mind, sitting in slots 2 and 4 (labeled A2 and B2 on the board). This is the configuration most buyers use, and it also gives the best overclocking potential. This way, the mainboard makers can advertise higher RAM overclocking frequencies for the board, and the majority of buyers get the ideal solution with two RAM modules.
Note: Never populate slots 1 and 3 first. When putting the modules into slot 1 and 3, the empty slots 2 and 4 would be similar to having some loose wires hanging from the end of each RAM contact, creating unwanted signal reflections and so on. So with two modules, they always need to go into the second slot of each memory channel (slot 2+4 aka A2 and B2), to not have "loose ends" after each RAM module.
Now the interesting question: what happens when we populate all four slots on a mainboard with a daisy-chain slot layout? Well, the modules in the second and fourth slots become "daisy-chained" behind the modules in the first and third slots. This considerably worsens the electrical properties of the whole memory system.
With four modules, there are now two modules per channel, and the two modules of a channel no longer have the same distance to the memory controller. That's because the PCB traces go to the first slot, and then on to the second slot. This daisy-chaining - with the signal lines going to the first and then to the second module of a memory channel - introduces a lot of unwanted electrical handicaps when using four modules. The signal quality worsens considerably in this case.
Only with a "T-Topology" slot layout do the PCB traces have exactly the same length across all four slots, which provides much better properties for four-module operation. But mainboards with T-Topology have gone a bit out of fashion, since most people use just two modules. Plus, the memory OC numbers look much better with a daisy chain layout and two modules. So if a mainboard maker were to use T-Topology on a board, they couldn't advertise such high overclocking numbers, and people would think the board is worse (and for only two modules, it actually would be).
Example of an ASUS board with the rare T-Topology layout, advertising the fact that it works better with four modules compared to the much more common boards using the daisy-chain layout.
2) Single-rank vs. dual-rank
Another consideration is single-rank vs. dual-rank modules. This is about how a RAM module is organized, meaning, how the individual memory chips on the module are addressed. To put it simply, with DDR4, most (if not all) 8 GB modules are single-rank nowadays, as well as a bunch of 16 GB modules. There's also some 16 GB DDR4 modules that are dual-rank, and all bigger modules are always dual-rank. With DDR5, the 16 GB and 24 GB modules are single-rank, and the 32 GB and 48 GB modules are dual-rank. We'll come to the implications of this soon.
The capacity at which modules start to be organized as dual-rank slowly shifts upwards as the technology advances. For example, in the early days of DDR4 there were a bunch of dual-rank 8 GB modules, but in modern RAM kits those are single-rank by now. Even the dual-rank 16 GB modules became less common as DDR4 developed further. With DDR5, the 8 GB modules are 100% single-rank from the start, and the 16 and 24 GB modules are almost certainly single-rank. Above that, it's dual-rank organization. Now, why is this important?
It has implications for the DDR speed that can be reached. The main reason is, a single-rank module puts less stress on the memory system. Dual-rank is slightly faster performance-wise (up to 4%), but also loads the memory controller more. One dual-rank module puts almost as much stress on the memory system as two single-rank modules! This can become an important factor once the DDR speed approaches certain limits.
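If you want a quick sanity check of what a given kit likely is, the capacity generalizations above can be boiled down to a rough lookup. Here is a small Python sketch of that rule of thumb (my own summary, not a guarantee - rank organization can vary between kits, so always verify with the spec sheet or a tool like CPU-Z's SPD tab):

# Rough rule of thumb: likely rank organization by module capacity (GB).
# Not guaranteed - always verify with the kit's spec sheet.
LIKELY_RANK = {
    "DDR4": {4: "1R", 8: "1R", 16: "1R or 2R", 32: "2R"},
    "DDR5": {8: "1R", 16: "1R", 24: "1R", 32: "2R", 48: "2R"},
}

def likely_rank(generation: str, capacity_gb: int) -> str:
    return LIKELY_RANK[generation].get(capacity_gb, "unknown")

print(likely_rank("DDR5", 24))  # -> "1R" (single-rank)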
What is the memory system? It consists of the CPU's integrated memory controller (IMC), the mainboard and its BIOS, and the RAM itself.
So the following factors all affect whether the RAM can actually run at a certain setting:
- The mainboard (chipset, component/PCB quality etc.).
- The mainboard's BIOS memory support and the BIOS settings.
- The CPU's integrated memory controller (IMC), quality depends on the CPU generation as well as on the individual CPU (silicon lottery).
- The properties of the RAM modules.
Every modern mainboard is happiest with two single-rank modules (in dual-channel operation), because this puts the least stress on the memory system, and is electrically the most ideal, considering that the memory slots are connected as a daisy chain. This is reflected in the maximum DDR frequencies that mainboards are advertised with.
Let's look at DDR4 first. Here is an example from the highest-end MSI DDR4 board model with the Intel Z690 chipset.
Specifications of MPG Z690 EDGE WIFI DDR4, under "Detail".
1DPC 1R Max speed up to 5333+ MHz
1DPC 2R Max speed up to 4800+ MHz
2DPC 1R Max speed up to 4400+ MHz
2DPC 2R Max speed up to 4000+ MHz
"DPC" means DIMM (=module) per channel, 1R means single-rank, 2R means dual-rank.
With 1DPC 1R = two single-rank modules (so, 2x 8 GB or 2x 16 GB single-rank), the highest frequencies can be reached.
With 1DPC 2R = two dual-rank modules (like 2x 16 GB dual-rank or 2x 32 GB), the maximum attainable frequency is lower, since the memory system is under more stress.
With 2DPC 1R = four single-rank modules (4x 8 GB or 4x 16 GB single-rank), the maximum frequency drops again, because four modules are even more challenging than two dual-rank modules.
And 2DPC 2R = four dual-rank modules (like 4x 16 GB dual-rank or 4x 32 GB) combines the downsides of the highest possible load on the memory controller with the electrical handicap of using four slots on a daisy-chain-mainboard.
The last configuration can already be difficult to get stable at DDR4-3200 sometimes, let alone DDR4-3600. One could consider themselves lucky to get DDR4-3600 working with four dual-rank modules, maybe having to use more relaxed timings for example. The 16 GB and 32 GB modules also often don't have particularly tight XMP timings to begin with, like DDR4-3600 18-22-22-42.
By the way: The second timing (tRCD) is more telling and important than the first one (tCL) to determine the module quality, but most people only look at tCL = CAS Latency.
With the new DDR5 standard, this drop in attainable frequency is even more pronounced. From the initial specs of one of the top MSI Z690 boards:
Specifications of MEG Z690 ACE, under "Detail".
1DPC 1R Max speed up to 6666+ MHz
1DPC 2R Max speed up to 5600+ MHz
2DPC 1R Max speed up to 4000+ MHz
2DPC 2R Max speed up to 4000+ MHz
When going from two modules (1DPC) to four modules (2DPC), the attainable frequency drops drastically. With two single-rank modules (up to 16 GB per module), DDR5-6000 and above is possible according to MSI. With two dual-rank modules (for example 2x 32 GB), that goes down a little already. But with four modules, the memory system is under a lot more stress, and MSI are quite open about the result. This seems to be a limitation of the DDR5 memory system, which relies even more on a very clean signal quality. Using four DDR5 modules on a board with a daisy-chain layout clearly is not good in that regard.
This deterioration with four DDR5 modules is so drastic that the conclusion could be: DDR5 motherboards should come with only 2 DIMM slots as standard (YouTube)
Now, with the 13th gen "Raptor Lake" Intel CPUs being available (13600K and up) which come with an improved memory controller, as well as newer BIOS versions containing some memory code optimizations, MSI have revised the frequency numbers for the boards a bit. Again looking at the Z690 ACE, the revised numbers are:
1DPC 1R Max speed up to 6666+ MHz
1DPC 2R Max speed up to 6000+ MHz
2DPC 1R Max speed up to 6000+ MHz
2DPC 2R Max speed up to 5600+ MHz
However, such specs are usually what their in-house RAM overclockers have achieved with hand-picked modules and custom RAM settings. And like many people have shared here on the forum before, it's not like you can drop in some DDR5-7200 or -7600 and expect it to just work, not even with the most high-end Z790 board and 13th gen CPU. Those aren't "plug & play" speeds, those high-end RAM kits are something that enthusiasts buy to have the best potential from the RAM (meaning, a highly binned kit), and then do a back and forth of fine-tuning in the BIOS and stress-testing to get it to where they want it. I have explained this more thoroughly in this post.
And this example is only for Intel DDR5 boards, which had about a one-year head start compared to AM5. What we're seeing on AM5 is: once people try to use four large DDR5 modules, they can consider themselves lucky if they can still get into the DDR5-5xxx range. Sometimes there are even problems getting it to boot properly, sometimes it gets stuck at low speeds and becomes unstable at anything even close to XMP speeds.
The main takeaway from all this for DDR5:
Whatever total RAM size is needed, it's better to reach it with two modules if decent speed/performance is required. Combining two kits of two high-speed modules each simply has a low likelihood of working. As mentioned, with four modules - especially dual-rank ones like 32 GB modules - the maximum frequency the memory system can reach drops considerably, which makes XMP/EXPO speeds stop working. There's a reason there are not that many four-module kits available, and the ones that exist usually have more conservative speeds. With DDR5 it's always better to use only two modules (even with DDR4 that is advised, but there, four modules can at least work quite decently).
This also means that DDR4 is actually better for high-capacity memory configurations such as 128 GB total, because:
- It doesn't experience such a large drop in the electrical properties of the memory system when using four modules
- Four-module high-capacity kits are readily available (and at a lower price)
- Four-module kits are actually certified on the memory QVL at MSI
- A 4x 32 GB DDR4 configuration will most likely outperform its DDR5 equivalent: DDR4's lower latencies win out against the low frequencies that DDR5 is forced down to in this configuration.
The overall higher DDR5 latencies just can't be compensated for by higher RAM frequencies anymore, since four DDR5 modules require lower frequencies to be stable. A quick calculation below illustrates this.
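Here is what that looks like in numbers. The first-word latency in nanoseconds follows from the CAS latency and the data rate (the formula is standard; the example kits are typical values, not measurements):

# First-word latency (ns) = CL cycles / memory clock, where the memory
# clock in MHz is half the DDR data rate in MT/s. Hence the factor 2000.
def cas_latency_ns(data_rate_mts: int, cl: int) -> float:
    return 2000 * cl / data_rate_mts

print(cas_latency_ns(3600, 18))  # DDR4-3600 CL18 -> 10.0 ns
print(cas_latency_ns(5200, 40))  # DDR5-5200 CL40 -> ~15.4 ns

So a four-module DDR4 setup running at DDR4-3600 can have a noticeably lower access latency than a four-module DDR5 setup that is forced down into the DDR5-5xxx range.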
See also RAM performance scaling.
Of course, on AM5 there is no option to go DDR4; it's DDR5 only. And eventually, even Intel will move to DDR5 only. So either make do with two modules and have the RAM still run at nice speeds, or use four modules knowing that there might be issues and that the RAM speed will end up lower. XMP speed might not be stable, so the "DRAM Frequency" setting might have to be lowered manually from the XMP value for it to work.
Generally, in case of RAM problems, no matter the technology, there are three possibilities, which can also be used in combination:
- Lower the frequency
- Loosen the timings
- Raise the voltage(s)
But in some cases, buying different RAM might be the best solution.
3) Amount of RAM
For a decent system up to mid-range, 16 GB (as 2x 8 GB) has been the norm for a long time, for good reason. Now, with DDR5, 32 GB (as 2x 16 GB) is slowly becoming the amount that a lot of people go for, at least from nice mid-range systems upwards. While 16 GB is actually still enough even for the most recent games, the system will be a bit more future-proof with 32 GB total. Anything beyond that, however, is useless for gaming; if anything, it tends to make things worse.
Why is that? Games don't really need more than 16 GB. A lot of games are developed with the lucrative console market in mind, and even the PlayStation 5 only has 16 GB of RAM. So games are designed from the ground up not to need more RAM, which then also applies to the PC versions of those games. There are only very few games that can use more than 16 GB of RAM, and it doesn't even make them run much faster. But I don't know a single game that will use more than 32 GB of RAM; they are nowhere near that. So even for a high-end gaming system, I would never use more than 32 GB total, when no game can use it anyway (and that's not about to change either). The 2x 8 GB (mostly DDR4) and 2x 16 GB kits always cause the least trouble and run the fastest, which is why one of those is the best choice for a gaming PC.
64 GB of RAM or more can be justified for large video editing projects, rendering, heavy Photoshop use, running lots of VMs and similar cases. For gaming, however, 64 GB amounts to a waste of money, no matter what. Before any game ever touches more than 32 GB, the whole PC will be long outdated, because that will take many years. Right now, most games restrict themselves to 16 GB maximum, because so many potential buyers out there have 16 GB of RAM in their system. The next step would be for games to use up to 32 GB, but we're not even there yet. So no system that is put together primarily for gaming should use more than a kit of 2x 16 GB RAM.
We could just say: ok, the money for that 64 GB of RAM (or more) would be wasted because it has no benefit for gaming, but "more is better", so let people use more RAM for their nice gaming system. However, using large 32 GB modules and/or four memory modules not only has no benefit, it also has a negative impact on the memory system. The bigger modules usually tend to run slower, and these configurations cause more stress for the memory system, increasing the likelihood of problems. So for gaming, I would never choose a configuration that can only cause problems for the memory system while providing no benefit from all that extra RAM.
Recommendations for use on modern consumer mainboards:
8 GB RAM: Use 2x 4 GB, or even 1x 8 GB if RAM performance isn't critical anyway - this is ok for entry-level systems, office work etc.
16 GB RAM: Use 2x 8 GB - for up to mid-range (gaming) systems
32 GB RAM: Use 2x 16 GB - for nice mid-range to high-end gaming systems (when all other bottlenecks are removed) and semi-pro uses beyond gaming
48 GB RAM (DDR5 only): Use 2x 24 GB - for nice mid-range to high-end gaming systems (when all other bottlenecks are removed) and semi-pro uses beyond gaming
64 GB RAM: Use 2x 32 GB - purely "beyond gaming" - only necessary for professional use - preferable over any four-module configuration
96 GB RAM (DDR5 only): Use 2x 48 GB - purely "beyond gaming" - only necessary for professional use - preferable over any four-module configuration
128 GB RAM: Use 4x 32 GB - purely "beyond gaming" - only necessary for professional use
256 GB RAM: Use 4x 64 GB - purely "beyond gaming" - only necessary for professional use
These last two configurations - using four dual-rank high-capacity modules - are maximally stressing the memory system, so they will probably be restricted to something like DDR4-3200 or lower, or DDR5-5200 or lower respectively. Any higher speeds might not run reliably.
The new DDR5-only option of 2x 24 GB is quite similar to 2x 16 GB, since the 24 GB modules should still be single-rank, basically making them as easy to run as the 16 GB modules - and thus preferable to the 32 GB modules, which are definitely dual-rank and put a higher stress on the memory system.
Also, for 128 GB total, I recommend DDR4, not DDR5. DDR5 really doesn't run well with 4x 32 GB; it would be restricted to quite low frequencies, pretty much negating the DDR5 advantage. With DDR5, I would actually never recommend using four modules, not even 4x 8 GB (the 8 GB modules are slower, and 2x 16 GB works better).
As for the XMP speed: for all the DDR4 configurations up to 64 GB total, I usually recommend DDR4-3600 (see chapter 4). For DDR5, the sweet spot is probably DDR5-6000. Above that, it gradually becomes more challenging to stabilize. The high DDR5-6xxx range or even DDR5-7xxx is something for enthusiasts who know what they're doing; that's not a "plug & play" speed anymore (especially on AM5), and experience is required to make it work.
3b) How to increase the RAM size when you have 2x 4 GB or 2x 8 GB RAM?
First choice: Replace the 2x 4 GB with 2x 8 GB, or the 2x 8 GB with 2x 16 GB. The new RAM should be a kit of matched modules. This ensures the best performance and the least problems, because in the end there's only two modules again.
Second choice: Add a kit of two matching modules to your two existing modules. But you might not be able to get the same modules again. Even if they are the same model, something internally might have changed. Or you might toy with the idea of adding completely different modules (for example, adding 2x 8 GB to your existing 2x 4 GB). This can all cause problems. The least problems can be expected when you add two modules that are identical to your old ones. But then there's still this: You are now stressing the memory system more with four modules instead of two, so the attainable RAM frequency might drop a little. Also, it's electrically worse on a mainboard with daisy-chain layout, as explained under 1).
Lastly, adding just one more module (for three modules total) is by far the worst choice, for several reasons. Every desktop platform has a dual-channel memory setup: it works best with two modules, and it can work decently with four. And if you only use the PC for light office work, even a single 4 GB or 8 GB module would do. But in a PC where performance matters, for example for gaming, buying a single RAM module to upgrade two existing modules is not good at all. The third module will be addressed in single-channel mode, while simultaneously ruining the memory system's electrical properties and forcing everything to run at whatever the slowest module's specification is.
Note: When upgrading the RAM, it's always good to check for BIOS updates, they often improve compatibility with newer RAM modules (even if it's not explicitly mentioned in the changelog).
4) DDR4 only: Today's sweet spot of DDR4-3600 with the latest CPUs
On AMD AM4, DDR4-3600 has been the sweet spot for quite a while. But Intel introduced new memory controllers in their 11th and 12th gen CPUs which also require a divider above a certain RAM frequency. Up to DDR4-3600 (and that pretty much guaranteed), the RAM and the CPU's memory controller (IMC) run at the same frequency (Intel calls this "Gear1 mode"; on AMD AM4 it's "UCLK DIV1 Mode" or "UCLK==MEMCLK"; generally this can be called "1:1 mode"). Somewhere above DDR4-3600, depending on the IMC's capabilities, the IMC has to run on a divider for it all to work (which would be 1:2 mode), making it run at half the RAM frequency. This costs a lot of performance.
An example on Intel Z590 with a kit of DDR4-3200: The IMC doesn't require a divider and can comfortably run in 1:1 mode (Gear1), which has the best performance.
The Gear2 mode that becomes necessary at high RAM frequencies has a substantial performance penalty, because the latencies increase (everything takes a little longer). This basically leads to the same situation that we already know from AMD AM4: RAM frequencies that are considerably above DDR4-3600 are almost useless, because of the divider being introduced for the IMC (memory controller). The performance loss with a divider is just too significant.
For the RAM performance to be on the same level again as DDR4-3600 without a divider (1:1 mode), it requires something like DDR4-4400 (!) with the divider in place (1:2 mode).
Looking at the high prices for DDR4-4400 kits, or what it takes to overclock a normal kit of RAM to that, it's not practical. So with Intel 11th- to 14th-gen CPUs on DDR4 boards, and of course AMD AM4 CPUs, the "sweet spot" is usually at DDR4-3600. This frequency is known to not require a divider for the memory controller and thus gives the best performance and bang-for-buck.
Some of the more recent CPU models can sometimes go a bit above DDR4-3600 without requiring a divider for the memory controller. But DDR4-3600 almost always runs well in 1:1 mode and has a better price/performance than RAM with higher specs, so it's still the top pick.
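To make the divider arithmetic concrete, here is a small sketch of the clock relationships (a simplified model; actual BIOS behavior depends on the board and CPU):

# The memory clock (MEMCLK) is half the DDR data rate. In Gear1 (1:1) the
# IMC runs at MEMCLK; in Gear2 (1:2) it runs at half of MEMCLK.
def imc_clock_mhz(data_rate_mts: int, gear: int) -> float:
    memclk = data_rate_mts / 2
    return memclk / gear

print(imc_clock_mhz(3600, 1))  # DDR4-3600 Gear1 -> IMC at 1800 MHz
print(imc_clock_mhz(4000, 2))  # DDR4-4000 Gear2 -> IMC at only 1000 MHz
print(imc_clock_mhz(4400, 2))  # DDR4-4400 Gear2 -> IMC at 1100 MHz

Even DDR4-4400 in Gear2 leaves the IMC well below the 1800 MHz it reaches with DDR4-3600 in Gear1, which is why the extra bandwidth barely compensates for the added latency.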
Here's an example of an AMD system (X570 with Ryzen 3900X). The tool HWinfo64 can show those frequencies in the "Sensors" window.
DDR4-3866 is too much to run in 1:1 mode, so the divider for the memory controller is active and performance is worse.
DDR4-3600 manages to run in 1:1 mode and the performance is better.
The best thing on both platforms nowadays is to run DDR4-3600 without a divider, and with some nice low timings if possible. Something like DDR4-4000 will usually make the BIOS enable the divider for the memory controller, and it will be slower overall than DDR4-3600 despite the higher RAM frequency. This is because the latencies effectively increase when the memory controller has to work at a lower frequency. With a DDR4-4000 kit, for example, I would enable XMP, but then manually set a DRAM frequency of DDR4-3600. This should make the BIOS remove the divider for the memory controller, and performance will immediately be better.
Here's a page from an MSI presentation about 11th gen Rocket Lake CPUs, showing the increased latencies when the divider comes into play:
And here's from an AMD presentation about the Ryzen 3000-series, showing similarly increased latencies once the divider is active:
With the higher DDR5 speeds, a divider is practically always used, because it's not feasible to run the memory controller at the same speed anymore. But with DDR5, the divider for the memory controller has less of a penalty than with DDR4, because DDR5 accesses a module via two separate sub-channels of 32 bits each (instead of one 64-bit channel as on DDR4). This allows for better interleaving of memory accesses on DDR5 and alleviates most of the latency penalty. On AMD, the FCLK can be left at 2000 MHz with DDR5; that seems to be the new "sweet spot".
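A small illustration of the sub-channel point (using the standard burst lengths, BL8 for DDR4 and BL16 for DDR5, and assuming the usual 64-byte CPU cache line):

# Bytes delivered per burst = bus width in bytes * burst length.
def bytes_per_burst(bus_width_bits: int, burst_length: int) -> int:
    return (bus_width_bits // 8) * burst_length

print(bytes_per_burst(64, 8))   # DDR4: one 64-bit channel, BL8 -> 64 bytes
print(bytes_per_burst(32, 16))  # DDR5: one 32-bit sub-channel, BL16 -> 64 bytes

Each DDR5 sub-channel still delivers a full 64-byte cache line per burst, but the two sub-channels can serve independent requests in parallel, which is where the better interleaving comes from.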
5) RAM stability testing
Memtest86 Free from https://www.memtest86.com/
I use this as a basic stability test on a new system, before I update the BIOS to the newest version (which is always one of the first things to do, as the factory BIOS will already be quite outdated). Also, since it runs from a USB stick/drive, I use it as a first check before booting Windows whenever something has significantly changed with the RAM or its settings. One or two passes give me a good idea of whether the system is generally stable enough to start installing Windows (or boot it).
It's a good first test if you are completely unsure about stability, as well as a good "finisher" if you want to be extra sure that everything is ok with your memory system after doing other testing. The main advantage is that it runs from USB. The main disadvantage is that RAM tests in Windows are more thorough in catching errors.
Launch the included ImageUSB program to prepare a USB drive with it, then boot from that drive (press F11 during POST for the boot menu).
The row hammer tests at the end, which test for a purely theoretical vulnerability and take a long time, can be skipped.
Once in Windows, a quick way for detecting RAM instabilities is TestMem5 or TM5 for short: https://github.com/CoolCmd/TestMem5
TM5 delivers a good and relatively quick indication of RAM stability. Run it as admin. I like to run it with the "1usmus_v3" configuration, which can be selected under Settings, because it reliably detects instability for me. A full run takes 90 minutes, but if there's instability, I've found it detects errors much earlier than that.
This is my go-to RAM test in Windows, because it is pretty reliable at revealing RAM errors when things are not 100% stable yet.
Example of unstable RAM (found after three minutes already):
No errors are acceptable - if any show up, something about the RAM configuration has to be changed until it passes without errors.
This example screenshot is not from me; you can see they used the "Universal 2" configuration, whereas I prefer "1usmus_v3" as mentioned.
Now, armed with just these two tools (Memtest86 for a basic stability test before even installing/booting Windows, and TM5 for more thorough testing in Windows), you should be able to detect most instability just fine. The following tools are therefore more for when you are really serious about RAM testing, for example if you manually tune all the timings and want to test them in every way possible.
To test RAM stability even more thoroughly, there is a test from Google called GSAT (Google stressapptest). It was specifically developed by Google to detect memory errors, because they use ordinary PCs instead of specialized servers for a lot of things. The only downside is that it takes a bit of time to set up. To run GSAT, you first have to enable the "Windows Subsystem for Linux":
After the necessary reboot, open the Microsoft Store app and install "Ubuntu", then run Ubuntu from the start menu.
It will ask for a username and password; they are not important, just enter a short password that you can remember - you'll need it for the update commands.
Then run the following commands one after the other (copy each line, then right-click into the Ubuntu window to paste it, then press enter):
sudo apt-get update
sudo apt full-upgrade -y
sudo apt-get install stressapptest
Then you can start GSAT with the command:
stressapptest -W -M 12000 -s 3600
This example tests 12 GB of RAM (in case of 16 GB total, because you need to leave some for Windows), for 3600 seconds (one hour). You can also enter -s 7200 for two hours.
If you have more RAM, always leave 4 GB for Windows; with 32 GB, for example, you would use "-M 28000".
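If you don't want to do the mental math for other RAM sizes, here is a trivial helper following that same "leave ~4 GB for Windows" rule (my own convention, adjust as needed):

# Size the GSAT test region: total RAM minus ~4 GB reserved for Windows.
def gsat_test_size_mb(total_ram_gb: int, reserve_gb: int = 4) -> int:
    return (total_ram_gb - reserve_gb) * 1000

print(f"stressapptest -W -M {gsat_test_size_mb(32)} -s 3600")  # -> -M 28000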
GSAT looks unspectacular, just some text scrolling through, but don't let that fool you, that tool is pretty stressful on your RAM (as it should be).
At the end, it has to say Status: PASS, and there should be no so-called "hardware incidents". Otherwise it's not stable.
Then, HCI Memtest is quite good. There is a useful tool for it, called MemTestHelper: https://github.com/integralfx/MemTestHelper/releases/tag/v2.2.0
It requires Memtest 6.4, which can be downloaded here: https://www.3dfxzone.it/programs/?objid=18508
(Because in the newest Memtest 7.0, they made a change so that MemTestHelper no longer works, pushing people to buy Memtest Pro instead.)
Put both tools in the same folder. Start MemTestHelper, and with 16 GB RAM, you can test up to 12000 MB (the rest is for Windows).
Let it run until 400% is passed. That's a good indicator that your RAM is stable. If you want to be really sure, let it run to 800%.
Another popular tool among serious RAM overclockers is Karhu from https://www.karhusoftware.com/ramtest/
But it costs 10€ to register, so I would just use the other, free programs (unless RAM OC is your hobby).
A stability test that also challenges the memory controller a lot, and is therefore definitely useful to round out the RAM-related testing:
Linpack Xtreme from https://www.techpowerup.com/download/linpack-xtreme/
Run Linpack, select 2 (Stress test), 5 (10 GB), set at least 10 times/trials, press Y to use all threads, 2x N, and let it do its thing.
It's one of the best tools to detect instability, but be warned: it also generates a lot of heat in the CPU. So I would watch the temperatures using HWinfo64's Sensors window.
Each trial has to say "pass", and it has to say "checks passed" at the end.
It also puts out a "GFlops" number, which is actually a decent performance metric to quickly judge whether a certain RAM tuning (e.g. lowering timings) has performance benefits.
An important note about RAM and heat: higher ambient temperatures are not good for RAM stability. The RAM might be perfectly stable in a RAM-specific stress test, but depending on the graphics card (its power consumption and cooling design), once it dumps its heat into the case very close to the RAM slots during gaming, there can be RAM-related crashes. Simply because it heats up the RAM a lot and makes it lose stability.
So to be absolutely sure that the RAM is stable even when it's hot, it can be good to run something like FurMark alongside the RAM stability test. Not for hours, because FurMark creates extreme GPU load, but just for 20 minutes or so, to really heat things up. A lot of times, the fins of the cooler are oriented towards the mainboard and the side panel, so the heat comes out from the sides of the card, and the RAM sits right above that.
If your RAM is fine in isolated RAM stress tests, but you get crashes in games (or when otherwise loading the GPU) with the same RAM settings, then you need to loosen those settings a bit to add headroom for those circumstances. Go by the three remedies for RAM instability: loosen timings and/or lower frequency and/or raise voltage.
Deep-diving a bit more into RAM:
It can quickly become a bit complicated, but if there are any questions, feel free to ask.
My other guides:
Guide: How to find a good PSU
Guide: How to set up a fan curve in the BIOS
Someone asked me if they can thank me for my work by sending me something via PayPal: yes, that's possible, just write me a message and I'll tell you my PayPal.
1) RAM slot layout
All regular mainboards and desktop CPU models have a dual-channel memory system. Since a lot of boards offer four RAM slots, a pair of two slots have to each form a RAM channel. So the four RAM slots are not individually addressed, but in pairs, as two channels. The different ways to connect the RAM slot pairs on the board are either "Daisy chain" or "T-Topology". This RAM slot layout decision - the way the slots are connected - has a big influence on how many modules (two or four) the board works best with.
Here is a slide from an MSI presentation, showing that almost all of today's boards have a "daisy chain" memory slot layout. This layout heavily prefers two-module-operation. The presentation is a bit older, but it's safe to say that the the vast majority of recent mainboards (for AMD and Intel) also have a daisy chain layout, and it's confirmed in several reviews. Especially MSI are known to use this layout on almost all their modern boards. For other mainboard makers, it depends on the board model, but they will also tend to prefer this layout.
Daisy chain means that the slot pairs are connected one after the other, and therefore optimized for two modules total. The right slot of each channel is the end point.
Using two RAM modules, they are to be inserted into slot 2 and 4 counted from the left as per the mainboard manual. Meaning, into the second slot of each channel and thus the end point. The reason is, this puts them at the very end of the PCB traces coming from the CPU, which is important for the electrical properties.
PCB (printed circuit board) traces are the thin signal lines that are visible on the mainboard, especially between the CPU and the RAM slots.
Why is this important? The PCB traces, going from the memory controller contacts of the CPU, to each contact of the RAM slots, are optimized to result in exactly the same distance between all those points. They are essentially "zig-zagging" across the board for an electrically ideal layout, making a few extra turns if a direct line would lead to an uneven distance.
This is done so that, with two modules, a) each RAM module is at the very end of the electrical traces coming from the CPU's memory controller, and b) each module has exactly the same distance to the memory controller across all contacts. We are dealing with nanosecond-exact timings, so all this matters.
On a mainboard with a daisy-chain RAM slot layout, this optimization is done with only two modules in mind, which are in slot 2 and 4 (on the board, those slots are called A2 and B2). This is the configuration that most buyers would use, and it also results in the best overclocking potential. This way, the mainboard makers can boast with higher RAM overclocking frequencies when advertising the board, and the majority of buyers will have the ideal solution with two RAM modules.
Note: Never populate slots 1 and 3 first. When putting the modules into slot 1 and 3, the empty slots 2 and 4 would be similar to having some loose wires hanging from the end of each RAM contact, creating unwanted signal reflections and so on. So with two modules, they always need to go into the second slot of each memory channel (slot 2+4 aka A2 and B2), to not have "loose ends" after each RAM module.
Now the interesting question. What happens when we populate all four slots on a mainboard with a daisy-chain slot layout? Well, the module in the second and fourth slot become "daisy-chained" after the modules in the first and third slot. This completely worsens the electrical properties of the whole memory system.
With four modules, there are now two modules per channel, and the two pairs of a channel don't have the same distance from the memory controller anymore. That's because the PCB traces go to the first slot, and then over to the second slot. This daisy-chaining - with the signal lines going to the first and then to the second module of a memory channel - introduces a lot of unwanted electrical handicaps when using four modules. The signal quality worsens considerably in this case.
Only with a "T-Topology" slot layout, the PCB traces have exactly the same length across all four slots, which would provide much better properties for four-module operation. But mainboards with T-Topology have gone a bit out of fashion, since most people use just two modules. Plus the memory OC numbers look much better with a daisy chain layout and two modules. So if the mainboard makers were to use T-topology on a board, they couldn't advertise with such high overclocking numbers, and people would think the board is worse (and it actually would be, for only two modules).
Example of an ASUS board with the rare T-Topology layout, advertising the fact that it works better with four modules compared to the much more common boards using the daisy-chain layout.
2) Single-rank vs. dual-rank
Another consideration is single-rank vs. dual-rank modules. This is about how a RAM module is organized, meaning, how the individual memory chips on the module are addressed. To put it simply, with DDR4, most (if not all) 8 GB modules are single-rank nowadays, as well as a bunch of 16 GB modules. There's also some 16 GB DDR4 modules that are dual-rank, and all bigger modules are always dual-rank. With DDR5, the 16 GB and 24 GB modules are single-rank, and the 32 GB and 48 GB modules are dual-rank. We'll come to the implications of this soon.
The capacity at which the modules start to be organized as dual-rank slowly shifts upwards as the technology advances. For example, in the early days of DDR4, there were a bunch of dual-rank 8 GB modules, but with the modern RAM kits, those modules will be single-rank by now. Even the dual-rank 16 GB modules became less prominent with DDR4 as it developed further. With DDR5, the 8 GB modules are 100% single-rank from the start, the 16 and 24 GB modules are almost certainly single-rank. Above that, it's dual-rank organization. Now, why is this important?
It has implications for the DDR speed that can be reached. The main reason is, a single-rank module puts less stress on the memory system. Dual-rank is slightly faster performance-wise (up to 4%), but also loads the memory controller more. One dual-rank module puts almost as much stress on the memory system as two single-rank modules! This can become an important factor once the DDR speed approaches certain limits.
What is the memory system? It consists of the CPU's integrated memory controller (IMC), the mainboard and its BIOS, and the RAM itself.
So the following factors all affect if the RAM can actually run at a certain setting:
- The mainboard (chipset, component/PCB quality etc.).
- The mainboard's BIOS memory support and the BIOS settings.
- The CPU's integrated memory controller (IMC), quality depends on the CPU generation as well as on the individual CPU (silicon lottery).
- The properties of the RAM modules.
Every modern mainboard will be the happiest with two single-rank modules (for dual-channel operation), because this causes the least stress on the memory system, and is electrically the most ideal, considering that the memory slots are connected as "daisy chain". This fact is reflected in the maximum DDR frequencies that the mainboards are advertised with.
Let's look at DDR4 first. Here is an example from the highest MSI DDR4 board model using Intel Z690 chipset.
Specifications of MPG Z690 EDGE WIFI DDR4, under "Detail".
1DPC 1R Max speed up to 5333+ MHz
1DPC 2R Max speed up to 4800+ MHz
2DPC 1R Max speed up to 4400+ MHz
2DPC 2R Max speed up to 4000+ MHz
"DPC" means DIMM (=module) per channel, 1R means single-rank, 2R means dual-rank.
With 1DPC 1R = two single-rank modules (so, 2x 8 GB or 2x 16 GB single-rank), the highest frequencies can be reached.
With 1DPC 2R = two dual-rank modules (like 2x 16 GB dual-rank or 2x 32 GB), the maximum attainable frequency is lower, since the memory system is under more stress.
With 2DPC 1R = four single-rank modules (4x 8 GB or 4x 16 GB single-rank), the maximum frequency drops again, because four modules are even more challenging than two dual-rank modules.
And 2DPC 2R = four dual-rank modules (like 4x 16 GB dual-rank or 4x 32 GB) combines the downsides of the highest possible load on the memory controller with the electrical handicap of using four slots on a daisy-chain-mainboard.
The last configuration can already be difficult to get stable at DDR4-3200 sometimes, let alone DDR4-3600. One could consider themselves lucky to get DDR4-3600 working with four dual-rank modules, maybe having to use more relaxed timings for example. The 16 GB and 32 GB modules also often don't have particularly tight XMP timings to begin with, like DDR4-3600 18-22-22-42.
By the way: The second timing (tRCD) is more telling and important than the first one (tCL) to determine the module quality, but most people only look at tCL = CAS Latency.
With the new DDR5 standard, this drop in attainable frequency is even more pronounced. From the initial specs of one of the top MSI Z690 boards:
Specifications of MEG Z690 ACE, under "Detail".
1DPC 1R Max speed up to 6666+ MHz
1DPC 2R Max speed up to 5600+ MHz
2DPC 1R Max speed up to 4000+ MHz
2DPC 2R Max speed up to 4000+ MHz
When going from two modules (1DPC) to four modules (2DPC), the attainable frequency drops drastically. With two single-rank modules (up to 16 GB per module), DDR5-6000 and above is possible according to MSI. With two dual-rank modules (for example 2x 32 GB), that goes down a little already. But with four modules, the memory system is under a lot more stress, and MSI are quite open about the result. This seems to be a limitation of the DDR5 memory system, which relies even more on a very clean signal quality. Using four DDR5 modules on a board with a daisy-chain layout clearly is not good in that regard.
This deterioration with four DDR5 modules is so drastic that the conclusion could be: DDR5 motherboards should come with only 2 dimm slots as standard (Youtube)
Now, with the 13th gen "Raptor Lake" Intel CPUs being available (13600K and up) which come with an improved memory controller, as well as newer BIOS versions containing some memory code optimizations, MSI have revised the frequency numbers for the boards a bit. Again looking at the Z690 ACE, the revised numbers are:
1DPC 1R Max speed up to 6666+ MHz
1DPC 2R Max speed up to 6000+ MHz
2DPC 1R Max speed up to 6000+ MHz
2DPC 2R Max speed up to 5600+ MHz
However, such specs are usually what their in-house RAM overclockers have achieved with hand-picked modules and custom RAM settings. And like many people have shared here on the forum before, it's not like you can drop in some DDR5-7200 or -7600 and expect it to just work, not even with the most high-end Z790 board and 13th gen CPU. Those aren't "plug & play" speeds, those high-end RAM kits are something that enthusiasts buy to have the best potential from the RAM (meaning, a highly binned kit), and then do a back and forth of fine-tuning in the BIOS and stress-testing to get it to where they want it. I have explained this more thoroughly in this post.
And this example is only for Intel DDR5 boards. They had about a one year head start compared to AM5. What we're seeing on AM5 is, once people try to use four large DDR5 modules, they can consider themselves lucky if the can still get into the DDR5-5xxx range. Sometimes there's even problems getting it to boot properly, sometimes it will be stuck at low speeds and get unstable at anything even close to XMP speeds.
The main takeaway from all this for DDR5:
Whatever total RAM size needed, it's better to reach it with two modules if decent speed/performance is required. Combining two kits of two high-speed modules each simply has a low likelihood of working. As mentioned, with four modules, especially dual-rank ones like 32 GB modules, the maximum frequency that the memory system can reach drops down considerably, which makes XMP/EXPO speeds not work anymore. There's a reason that there are not that many four-module kits available, and they are usually a more conservative speed. With DDR5 it's always better to use two modules only (even with DDR4 that is advised, but four modules can at least work quite decently there).
This also means that DDR4 is actually better for high-capacity memory configurations such as 128 GB total, because:
- It doesn't experience such a large drop in the electrical properties of the memory system when using four modules
- Four-module high-capacity kits are readily available (and at a lower price)
- Four-module kits are actually certified on the memory QVL at MSI
- They will most likely outperform their DDR5 equivalent due to DDR4's lower latencies, when compared to DDR5's necessary low required frequencies at this configuration.
The overall higher DDR5 latencies just can't be compensated for by higher RAM frequencies anymore, since using four DDR5 modules requires lower frequencies to be stable.
See also RAM performance scaling.
Of course, on AM5 there is no option to go DDR4, it's DDR5 only. And eventually, even Intel will move to DDR5 only. So, either make do with two modules and have the RAM still run at nice speeds, or use four modules in the knowledge that there might be issues and the RAM speed will end up being lower. XMP speed might not be stable, so the "DRAM Frequency" setting might have to be lowered manually from XMP for it to work.
Generally, in case of RAM problems, no matter the technology, there are three possibilities, which can also be used in combination:
- Lower the frequency
- Loosen the timings
- Raise the voltage(s)
But in some cases, buying different RAM might be the best solution.
3) Amount of RAM
For a decent system up to mid-range, 16 GB (as 2x 8 GB) has been the norm for a long time, for good reason. Now, with DDR5, 32 GB (as 2x 16 GB) are slowly becoming the amount that a lot of people go for, at least for nice mid-range systems upwards. While 16 GB are actually still enough even for the most recent games, the system will be a bit more future-proof with 32 GB total. Anything beyond that, however, is useless for gaming, it only tends to make it worse.
Why is that? Games don't really need more than 16 GB. A lot of games are developed with the lucrative console market in mind, and even the PlayStation 5 only has 16 GB of RAM. So games are designed from the ground up not to need more RAM, which then also applies to the PC versions of those games. There are only very few games who can use more than 16 GB RAM, and it doesn't even make them run a lot faster. But i don't know a single game that will use more than 32 GB RAM, they are not even anywhere near that. So even for a high-end gaming system, i would never use more than 32 GB total, when no game can use it anyway (and that's not about to change either). The 2x 8 GB (mostly DDR4) / 2x 16 GB kits always cause the least trouble and run the fastest, that's why one of those is the best choice for a gaming PC.
64 GB RAM or more can be justified for large video editing projects, rendering, heavy photoshop use, running lots of VMs and such cases. However, 64 GB amounts to a waste of money for gaming, no matter what. Before any game will ever touch more than 32 GB, the whole PC will be long outdated, because it will take many years. Right now, most games restrict themselves to 16 GB maximum, because so many potential buyers out there have 16 GB RAM in their system. The next step would be for games to use up to 32 GB, but we're not even there yet. So no system that is put together primarily for gaming should use more than a kit of 2x 16 GB RAM.
We could just be like, ok, the money for that 64 GB RAM (or more) would be wasted because it doesn't have any benefits for gaming, but "more is better", so let the people use more RAM for their nice gaming system. However, when using large 32 GB RAM modules and/or four memory modules, it not only has no benefits, it also has a negative impact on the memory system. The bigger modules usually tend to run slower, and these configurations will also cause more stress for the memory system, increasing the likelihood of problems. So for gaming, i would never choose a configuration which can only cause problems for the memory system, but doesn't provide any benefit from that much RAM being available.
Recommendations for use on modern consumer mainboards:
8 GB RAM: Use 2x 4 GB, or even 1x 8 GB if RAM performance isn't critical anyway - this is ok for entry-level systems, office work etc.
16 GB RAM: Use 2x 8 GB - for up to mid-range (gaming) systems
32 GB RAM: Use 2x 16 GB - for nice mid-range to high-end gaming systems (when all other bottlenecks are removed) and semi-pro uses beyond gaming
48 GB RAM (DDR5 only): Use 2x 24 GB - for nice mid-range to high-end gaming systems (when all other bottlenecks are removed) and semi-pro uses beyond gaming
64 GB RAM: Use 2x 32 GB - purely "beyond gaming" - only necessary for professional use - preferable over any four-module configuration
96 GB RAM (DDR5 only): Use 2x 48 GB - purely "beyond gaming" - only necessary for professional use - preferable over any four-module configuration
128 GB RAM total : Use 4x 32 GB - purely "beyond gaming" - only necessary for professional use
256 GB RAM total : Use 4x 64 GB - purely "beyond gaming" - only necessary for professional use
These last two configurations - using four dual-rank high-capacity modules - are maximally stressing the memory system, so they will probably be restricted to something like DDR4-3200 or lower, or DDR5-5200 or lower respectively. Any higher speeds might not run reliably.
The new DDR5-only option of 2x 24 GB is quite similar to 2x 16 GB, since the 24 GB modules should still be single-rank, basically making them as easy to run as the 16 GB modules. And thus preferable to the 32 GB modules, which are definitely dual-rank and put a higher stress on the memory system.
Also, for 128 GB total, i recommend DDR4, not DDR5. DDR5 really doesn't run well with 4x 32 GB, it would be restricted to quite low frequencies, pretty much negating the DDR5 advantage. With DDR5 RAM, i would actually never recommend using four modules, not even 4x 8 GB (the 8 GB modules are slower and 2x 16 GB work better).
As for the XMP speed: For all the DDR4 configurations up to 64 GB total, i usually recommend DDR4-3600 speed (see chapter 4). For DDR5, the sweet spot would probably be DDR5-6000. Above that, it can gradually become more challenging to stabilize. Around the high DDR5-6xxx range or even into DDR5-7xxx, it's something for enthusiasts who know what they're doing, that's not a "plug & play" speed anymore (especially with AM5), and experience is required to make it work.
3b) How to increase the RAM size when you have 2x 4 GB or 2x 8 GB RAM?
First choice: Replace the 2x 4 GB with 2x 8 GB, or the 2x 8 GB with 2x16 GB. The new RAM should be a kit of matched modules. This will ensure the best performance and the least problems, because there's only two modules again in the end.
Second choice: Add a kit of two matching modules to your two existing modules. But you might not be able to get the same modules again. Even if they are the same model, something internally might have changed. Or you might toy with the idea of adding completely different modules (for example, adding 2x 8 GB to your existing 2x 4 GB). This can all cause problems. The least problems can be expected when you add two modules that are identical to your old ones. But then there's still this: You are now stressing the memory system more with four modules instead of two, so the attainable RAM frequency might drop a little. Also, it's electrically worse on a mainboard with daisy-chain layout, as explained under 1).
Lastly, adding just one more module (to have three modules total) is by far the worst choice for several reasons. Every desktop platform has a dual-channel memory setup. This means it works best with two modules, and it can work decently with four modules. And if you only use the PC for light office work, even a single 4GB or a single 8GB module would do. But in a PC where performance matters, for example for gaming, getting a single RAM module to upgrade when you have two existing modules is not good at all. The third module will be addressed in single-channel mode, while simultaneously ruining the memory system's electrical properties and making everything work at whatever the slowest module's specification is.
Note: When upgrading the RAM, it's always good to check for BIOS updates, they often improve compatibility with newer RAM modules (even if it's not explicitly mentioned in the changelog).
4) DDR4 only: Today's sweet spot of DDR4-3600 with the latest CPUs
On AMD AM4, DDR4-3600 has been the sweet spot for quite a while. But Intel introduced new memory controllers in their 11th gen and 12th gen CPUs which also require a divider above a certain RAM frequency. Only up to DDR4-3600 (but that pretty much guaranteed), the RAM and the CPU's memory controller (IMC) run at the same frequency (Intel calls this "Gear1 mode", on AMD AM4 it's "UCLK DIV1 Mode" on "UCLK==MEMCLK“, generally this can be called "1:1 mode"). Somewhere above DDR4-3600, depending on the IMC's capabilities, the IMC has to run on a divider for it all to work (which would be 1:2 mode), which makes it run at half the RAM frequency. This costs a lot of performance.
An example on Intel Z590 with a kit of DDR4-3200: The IMC doesn't require a divider and can comfortably run in 1:1 mode (Gear1), which has the best performance.
The Gear2 mode that becomes necessary at high RAM frequencies has a substantial performance penalty, because the latencies increase (everything takes a little longer). This basically leads to the same situation that we already know from AMD AM4: RAM frequencies that are considerably above DDR4-3600 are almost useless, because of the divider being introduced for the IMC (memory controller). The performance loss with a divider is just too significant.
For the RAM performance to be on the same level again as DDR4-3600 without a divider (1:1 mode), it requires something like DDR4-4400 (!) with the divider in place (1:2 mode).
Looking at the high prices for DDR4-4400 kits, or what it takes to overclock a normal kit of RAM to that, it's not practical. So with Intel 11th- to 14th-gen CPUs on DDR4 boards, and of course AMD AM4 CPUs, the "sweet spot" is usually at DDR4-3600. This frequency is known to not require a divider for the memory controller and thus gives the best performance and bang-for-buck.
Some of the more recent CPU models can sometimes go a bit above DDR4-3600 without requiring a divider for the memory controller. But DDR4-3600 almost always runs well in 1:1 mode and has a better price/performance than RAM with higher specs, so it's still the top pick.
Here's an example of an AMD system (X570 with Ryzen 3900X). The tool HWinfo64 can show those frequencies in the "Sensors" window.
DDR4-3866 is too much to run in 1:1 mode, so the divider for the memory controller is active and performance is worse.
DDR4-3600 manages to run in 1:1 mode and the performance is better.
The best thing on both platforms nowadays is to run DDR4-3600 without a divider and with some nice low timings if possible. Something like DDR4-4000 will usually make the BIOS enable the divider for the memory controller and it will be slower overall than DDR4-3600, despite the higher RAM frequency. This is because the latencies are effectively increased when the memory controller has to work at a lower frequency. With a DDR4-4000 kit of RAM for example, i would enable XMP, but then manually set a DRAM frequency of DDR4-3600. This should make the BIOS remove the divider for the memory controller and the performance will immediately be better.
Here's a page from an MSI presentation about 11th gen Rocket Lake CPUs, showing the increased latencies when the divider comes into play:
And here's from an AMD presentation about the Ryzen 3000-series, showing similarly increased latencies once the divider is active:
With the higher DDR5 speeds, a divider is practically always used, because it's not feasible to run the memory controller at the same speed anymore. But with DDR5, the divider for the memory controller has less of a penalty than with DDR4, because DDR5 can access a module via two seperate sub-channels of 2x 32 bits (instead of one 64 bit channel like on DDR4). This allows for higher/better interleaving of memory accesses on DDR5 and alleviates most of the latency penalties. On AMD the FCLK can be left at 2000 MHz with DDR5, it seems to be the new "sweet spot".
5) RAM stability testing
Memtest86 Free from https://www.memtest86.com/
I use this as a basic stability test on a new system before i update the BIOS to the newest version (which is always one of the first things to do, as the factory BIOS will already be quite outdated). Also, since it runs from a USB stick/drive, i use it as a first check before booting Windows, when something has significantly changed with the RAM or its settings. One or two passes of this give me a good idea if the system is generally stable enough to start installing Windows (or boot it).
It's a good first test if you are completely unsure about stability, as well as a good "finisher" if you want to be extra sure that everything is ok with your memory system after other testing. The main advantage is that it runs from USB; the main disadvantage is that the Windows-based RAM tests below are more thorough at catching errors.
Launch the included ImageUSB program to prepare a USB drive with it, then boot from that drive (press F11 during POST for the boot menu).
The row hammer tests at the end, which test for a purely theoretical vulnerability and take a long time, can be skipped.
Once in Windows, a quick way to detect RAM instability is TestMem5, or TM5 for short: https://github.com/CoolCmd/TestMem5
TM5 gives a good and relatively quick indication of RAM stability. Run it as admin. I like to run it with the "1usmus_v3" configuration, which can be selected under Settings, because it reliably detects instability for me. A full run takes 90 minutes, but if there's instability, i found it usually detects errors much earlier than that.
This is my go-to RAM test in Windows, because it is pretty reliable at revealing RAM errors when things are not 100% stable yet.
Example of unstable RAM (found after three minutes already):
No errors are acceptable; if any show up, something about the RAM configuration has to be changed until it passes without errors.
This example screenshot is not from me; you can see they used the "Universal 2" configuration, whereas i prefer "1usmus_v3" as mentioned.
Now, armed just with these two tools (Memtest86 for a basic stability test before even installing/booting Windows, and TM5 for more thorough testing in Windows), you should be able to detect most instability just fine. The following tools are therefore more for when you are really serious about RAM testing, for example if you manually tune all the timings and want to test everything in every way possible.
To test RAM stability even more thoroughly, there is a test from Google called GSAT (Google stressapptest). It was specifically developed by Google to detect memory errors, because they use ordinary PCs instead of specialized servers for a lot of things. The only downside is that it takes a bit of time to set up. To run GSAT, you first have to enable the "Windows Subsystem for Linux":
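(If you can't follow the screenshot: on a reasonably recent Windows 10 or 11, opening an administrator PowerShell and running the command below should also enable it; alternatively, tick "Windows Subsystem for Linux" under "Turn Windows features on or off".)

wsl --install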
After the necessary reboot, open the Microsoft Store app and install "Ubuntu", then run Ubuntu from the start menu.
It will ask for a username and password; they are not important, just enter a short password that you can remember, since you'll need it for the following commands.
Then run the following commands one after the other (copy each line, then right-click into the Ubuntu window to paste it, then press enter):
sudo apt-get update
sudo apt full-upgrade -y
sudo apt-get install stressapptest
Then you can start GSAT with the command:
stressapptest -W -M 12000 -s 3600
This example tests 12 GB of RAM (in the case of 16 GB total, because you need to leave some for Windows), for 3600 seconds (one hour). You can also enter -s 7200 for two hours.
If you have more RAM, always leave 4 GB for Windows, so with 32 GB, you would use "-M 28000".
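If you don't want to do the math yourself, something like this should also work in the Ubuntu shell (a small sketch that reads the total RAM in MB via the free command and subtracts 4 GB for Windows):

stressapptest -W -M $(( $(free -m | awk '/^Mem:/{print $2}') - 4096 )) -s 3600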
GSAT looks unspectacular, just some text scrolling through, but don't let that fool you, that tool is pretty stressful on your RAM (as it should be).
At the end, it has to say Status: PASS, and there should be no so-called "hardware incidents". Otherwise it's not stable.
Then, HCI Memtest is quite good. There is a useful tool for it, called MemTestHelper: https://github.com/integralfx/MemTestHelper/releases/tag/v2.2.0
It requires Memtest 6.4, which can be downloaded here: https://www.3dfxzone.it/programs/?objid=18508
(This is because the newest Memtest 7.0 was changed so that MemTestHelper doesn't work with it anymore, to push people towards buying Memtest Pro.)
Put both tools in the same folder. Start MemTestHelper, and with 16 GB RAM, you can test up to 12000 MB (the rest is for Windows).
Let it run until 400% coverage is reached. That's a good indicator that your RAM is stable. If you want to make really sure, let it run to 800%.
Another popular tool among serious RAM overclockers is Karhu from https://www.karhusoftware.com/ramtest/
But it costs 10€ to register, so i would just use the other free programs (unless RAM OC is your hobby).
A stability test which also challenges the memory controller a lot, and is therefore definitely useful to round out the RAM-related testing:
Linpack Xtreme from https://www.techpowerup.com/download/linpack-xtreme/
Run Linpack, select 2 (Stress test), then 5 (10 GB), set at least 10 trials, press Y to use all threads, answer N twice, and let it do its thing.
It's one of the best tools to detect instability, but be warned, it also generates a lot of heat in the CPU. So i would watch the temperatures using HWiNFO64's Sensors window.
Each trial has to say "pass", and it has to say "checks passed" at the end.
It also puts out a "GFlops" number, which is actually a decent performance metric to quickly judge whether a certain RAM tuning (like lowering the timings) brings performance benefits.
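For reference, that number comes from the standard LINPACK operation count: solving an n x n system takes roughly 2/3*n^3 floating-point operations, divided by the runtime. A tiny Python sketch with made-up example numbers:

def linpack_gflops(n, seconds):
    # standard LINPACK flop count for an n x n solve: 2/3*n^3 + 2*n^2
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / seconds / 1e9

print(round(linpack_gflops(40000, 300), 1))  # ~142.2 GFlops for a
                                             # hypothetical 40000x40000 run
                                             # that takes 300 seconds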
An important note about RAM and heat: higher temperatures are not good for RAM stability. The RAM might be perfectly stable in a RAM-specific stress test, but depending on the graphics card (its power consumption and cooling design), once it dumps its heat into the case very close to the RAM slots during gaming, there can be RAM-related crashes, simply because it heats up the RAM a lot and makes it lose stability.
So to be absolutely sure that the RAM is stable even when it's hot, it can be good to run something like FurMark alongside the RAM stability test. Not for hours, because FurMark creates extreme GPU load, but just for 20 minutes or so, to really heat things up. A lot of times, the fins of the cooler are oriented towards the mainboard and the side panel, so the heat comes out from the sides of the card, and the RAM sits right above that.
If your RAM is fine in isolated RAM stress tests, but you get crashes in games (or when otherwise loading the GPU) with the same RAM settings, then you need to loosen those settings a bit to add headroom for those circumstances. Go by the three remedies for RAM instability: loosen the timings, and/or lower the frequency, and/or raise the voltage.
Deep-diving a bit more into RAM:
It can quickly become a bit complicated, but if there are any questions, feel free to ask.
My other guides:
Guide: How to find a good PSU
Guide: How to set up a fan curve in the BIOS
Someone asked me if they can thank me for my work by sending me something via Paypal: Yes, that's possible, just write me a message and i'll tell you my Paypal