RAM explained: Why two modules are better than four / single vs. dual-rank / stability testing

citay
Since some people run into problems with four RAM modules on modern MSI mainboards, I wanted to explain the reasons behind that, and why two modules are often superior. The main reason lies in the way the memory slots are connected to the memory controller, which is inside the CPU. So the first explanation is about:


1) RAM slot layout

All regular mainboards and desktop CPU models have a dual-channel memory system. Since a lot of boards offer four RAM slots, each pair of slots forms one RAM channel. So the four RAM slots are not addressed individually, but in pairs, as two channels. The slot pairs can be connected on the board in one of two ways: "Daisy chain" or "T-Topology". This RAM slot layout decision - the way the slots are connected - has a big influence on how many modules (two or four) the board works best with.

Here is a slide from an MSI presentation, showing that almost all of today's boards have a "daisy chain" memory slot layout. This layout heavily favors two-module operation. The presentation is a bit older, but it's safe to say that the vast majority of recent mainboards (for AMD and Intel) also have a daisy chain layout, and it's confirmed in several reviews. MSI in particular is known to use this layout on almost all of its modern boards. For other mainboard makers it depends on the board model, but they also tend to prefer this layout.

Daisy Chain.jpg


Daisy chain means that the slot pairs are connected one after the other, and the layout is therefore optimized for two modules total. The right slot of each channel is the end point.
With two RAM modules, they are to be inserted into slots 2 and 4, counted from the left, as per the mainboard manual. That means into the second slot of each channel, and thus the end point. The reason: this puts them at the very end of the PCB traces coming from the CPU, which is important for the electrical properties.
PCB (printed circuit board) traces are the thin signal lines that are visible on the mainboard, especially between the CPU and the RAM slots.

memory-layout.gif


Why is this important? The PCB traces going from the memory controller contacts of the CPU to each contact of the RAM slots are optimized to have exactly the same length between all those points. They are essentially "zig-zagging" across the board for an electrically ideal layout, making a few extra turns wherever a direct line would lead to an uneven distance.

This is done so that, with two modules, a) each RAM module is at the very end of the electrical traces coming from the CPU's memory controller, and b) each module has exactly the same distance to the memory controller across all contacts. We are dealing with nanosecond-exact timings, so all this matters.
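To put rough numbers on this (a back-of-the-envelope sketch, using a typical propagation figure for FR4 board material rather than a measured value):

Signal speed on a PCB trace: roughly 6-7 ps per mm
DDR4-3600: one data transfer every 1 / 3600 MT/s = ~278 ps

So a trace length mismatch of just a few centimeters would already consume a large fraction of the timing window of a single transfer, which is why the traces are length-matched so carefully.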

On a mainboard with a daisy-chain RAM slot layout, this optimization is done with only two modules in mind, which go into slots 2 and 4 (on the board, those slots are called A2 and B2). This is the configuration most buyers use, and it also results in the best overclocking potential. This way, the mainboard makers can boast higher RAM overclocking frequencies when advertising the board, and the majority of buyers get the ideal solution with two RAM modules.

Note: Never populate slots 1 and 3 first. With the modules in slots 1 and 3, the empty slots 2 and 4 would act like loose wires hanging off the end of each RAM contact, creating unwanted signal reflections and so on. So with two modules, they always need to go into the second slot of each memory channel (slots 2+4, aka A2 and B2), so there are no "loose ends" after each RAM module.

Slots.png
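By the way, if you want to double-check which slots the modules ended up in without opening the case, Windows can list the populated slots. A quick sketch (run in PowerShell; the exact slot names like "DIMM_A2" vary by board and BIOS, so treat them as examples):

Get-CimInstance Win32_PhysicalMemory | Select-Object DeviceLocator, Capacity, Speed

Each line of output is one populated slot (DeviceLocator), along with the module's capacity (in bytes) and rated speed.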


Now the interesting question: what happens when we populate all four slots on a mainboard with a daisy-chain slot layout? Well, the modules in the second and fourth slots become "daisy-chained" after the modules in the first and third slots. This considerably worsens the electrical properties of the whole memory system.

With four modules, there are now two modules per channel, and the two modules of a channel no longer have the same distance from the memory controller. That's because the PCB traces go to the first slot, and from there over to the second slot. This daisy-chaining - with the signal lines going to the first and then to the second module of a memory channel - introduces a lot of unwanted electrical handicaps when using four modules. The signal quality worsens considerably in this case.

Only with a "T-Topology" slot layout do the PCB traces have exactly the same length across all four slots, which provides much better properties for four-module operation. But mainboards with T-Topology have gone a bit out of fashion, since most people use just two modules. Plus, the memory OC numbers look much better with a daisy chain layout and two modules. So if the mainboard makers were to use T-Topology on a board, they couldn't advertise such high overclocking numbers, and people would think the board is worse (and it actually would be, with only two modules).

topology2.jpg
Example of an ASUS board with the rare T-Topology layout, advertising the fact that it works better with four modules compared to the much more common boards using the daisy-chain layout.


2) Single-rank vs. dual-rank

Another consideration is single-rank vs. dual-rank modules. This is about how a RAM module is organized, meaning how the individual memory chips on the module are addressed. To put it simply: with DDR4, most (if not all) 8 GB modules are single-rank nowadays, as are a number of 16 GB modules. There are also some 16 GB DDR4 modules that are dual-rank, and all bigger modules are always dual-rank. With DDR5, the 16 GB and 24 GB modules are single-rank, and the 32 GB and 48 GB modules are dual-rank. We'll come to the implications of this soon.
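As a simplified sketch of why module capacity maps to rank count (actual products vary with the DRAM die density used): a rank is a 64-bit-wide group of memory chips that the controller addresses as one unit. With the common x8 chips, that means 64 bit / 8 bit = 8 chips per rank. So a 16 GB DDR4 module built from modern 16-Gbit (2 GB) dies needs only 8 chips = one rank (single-rank), while the same capacity built from older 8-Gbit (1 GB) dies needs 16 chips = two ranks (dual-rank).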

The capacity at which modules start to be organized as dual-rank slowly shifts upwards as the technology advances. For example, in the early days of DDR4 there were a number of dual-rank 8 GB modules, but in modern RAM kits those modules will be single-rank by now. Even the dual-rank 16 GB modules became less common as DDR4 developed further. With DDR5, the 8 GB modules are 100% single-rank from the start, and the 16 and 24 GB modules are almost certainly single-rank. Above that, it's dual-rank organization. Now, why is this important?

It has implications for the DDR speed that can be reached. The main reason is, a single-rank module puts less stress on the memory system. Dual-rank is slightly faster performance-wise (up to 4%), but also loads the memory controller more. One dual-rank module puts almost as much stress on the memory system as two single-rank modules! This can become an important factor once the DDR speed approaches certain limits.

What is the memory system? It consists of the CPU's integrated memory controller (IMC), the mainboard and its BIOS, and the RAM itself.
So the following factors all affect whether the RAM can actually run at a certain setting:

- The mainboard (chipset, component/PCB quality etc.).
- The mainboard's BIOS memory support and the BIOS settings.
- The CPU's integrated memory controller (IMC), quality depends on the CPU generation as well as on the individual CPU (silicon lottery).
- The properties of the RAM modules.

Every modern mainboard is happiest with two single-rank modules (for dual-channel operation), because this causes the least stress on the memory system and is electrically the most ideal, considering that the memory slots are connected in a "daisy chain". This fact is reflected in the maximum DDR frequencies that the mainboards are advertised with.

Let's look at DDR4 first. Here is an example from the highest MSI DDR4 board model with the Intel Z690 chipset.
Specifications of MPG Z690 EDGE WIFI DDR4, under "Detail".
1DPC 1R Max speed up to 5333+ MHz
1DPC 2R Max speed up to 4800+ MHz
2DPC 1R Max speed up to 4400+ MHz
2DPC 2R Max speed up to 4000+ MHz

"DPC" means DIMM (=module) per channel, 1R means single-rank, 2R means dual-rank.

With 1DPC 1R = two single-rank modules (so, 2x 8 GB or 2x 16 GB single-rank), the highest frequencies can be reached.
With 1DPC 2R = two dual-rank modules (like 2x 16 GB dual-rank or 2x 32 GB), the maximum attainable frequency is lower, since the memory system is under more stress.
With 2DPC 1R = four single-rank modules (4x 8 GB or 4x 16 GB single-rank), the maximum frequency drops again, because four modules are even more challenging than two dual-rank modules.
And 2DPC 2R = four dual-rank modules (like 4x 16 GB dual-rank or 4x 32 GB) combines the downsides of the highest possible load on the memory controller with the electrical handicap of using four slots on a daisy-chain-mainboard.

The last configuration can sometimes be difficult to get stable even at DDR4-3200, let alone DDR4-3600. One could consider themselves lucky to get DDR4-3600 working with four dual-rank modules, perhaps by using more relaxed timings. The 16 GB and 32 GB modules also often don't have particularly tight XMP timings to begin with, something like DDR4-3600 18-22-22-42.
By the way: the second timing (tRCD) is more telling than the first one (tCL) for judging module quality, but most people only look at tCL = CAS Latency.
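To compare timings across different frequencies, it helps to convert a timing from clock cycles into nanoseconds. This is the standard conversion (the factor 2000 comes from DDR transferring twice per clock, so the actual clock is half the MT/s rate; the two example kits are just for illustration):

Latency in ns = 2000 x timing / (DDR rate in MT/s)

DDR4-3600 CL18: 2000 x 18 / 3600 = 10.0 ns
DDR4-3200 CL16: 2000 x 16 / 3200 = 10.0 ns

So a numerically higher CL at a higher frequency can mean the exact same absolute latency.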


With the new DDR5 standard, this drop in attainable frequency is even more pronounced. From the initial specs of one of the top MSI Z690 boards:
Specifications of MEG Z690 ACE, under "Detail".
1DPC 1R Max speed up to 6666+ MHz
1DPC 2R Max speed up to 5600+ MHz
2DPC 1R Max speed up to 4000+ MHz
2DPC 2R Max speed up to 4000+ MHz

When going from two modules (1DPC) to four modules (2DPC), the attainable frequency drops drastically. With two single-rank modules (up to 16 GB per module), DDR5-6000 and above is possible according to MSI. With two dual-rank modules (for example 2x 32 GB), that already goes down a little. But with four modules, the memory system is under a lot more stress, and MSI are quite open about the result. This seems to be a limitation of the DDR5 memory system, which relies even more on very clean signal quality. Using four DDR5 modules on a board with a daisy-chain layout is clearly not good in that regard.
This deterioration with four DDR5 modules is so drastic that the conclusion could be: DDR5 motherboards should come with only 2 DIMM slots as standard (YouTube)

Now, with the 13th gen "Raptor Lake" Intel CPUs available (13600K and up), which come with an improved memory controller, as well as newer BIOS versions containing some memory code optimizations, MSI have revised the frequency numbers for the boards a bit. Again looking at the Z690 ACE, the revised numbers are:
1DPC 1R Max speed up to 6666+ MHz
1DPC 2R Max speed up to 6000+ MHz
2DPC 1R Max speed up to 6000+ MHz
2DPC 2R Max speed up to 5600+ MHz

However, such specs are usually what their in-house RAM overclockers have achieved with hand-picked modules and custom RAM settings. And as many people have shared here on the forum before, it's not like you can drop in some DDR5-7200 or -7600 and expect it to just work, not even with the most high-end Z790 board and a 13th gen CPU. Those aren't "plug & play" speeds. Those high-end RAM kits are something that enthusiasts buy to have the best potential from the RAM (meaning, a highly binned kit), and then they do a back-and-forth of fine-tuning in the BIOS and stress-testing to get it to where they want it. I have explained this more thoroughly in this post.

And this example is only for Intel DDR5 boards, which had about a one-year head start compared to AM5. What we're seeing on AM5 is: once people try to use four large DDR5 modules, they can consider themselves lucky if they can still get into the DDR5-5xxx range. Sometimes there are even problems getting it to boot properly; sometimes it will be stuck at low speeds and get unstable at anything even close to XMP speeds.

The main takeaway from all this for DDR5:

Whatever total RAM size is needed, it's better to reach it with two modules if decent speed/performance is required. Combining two kits of two high-speed modules each simply has a low likelihood of working. As mentioned, with four modules, especially dual-rank ones like 32 GB modules, the maximum frequency that the memory system can reach drops considerably, which makes XMP/EXPO speeds stop working. There's a reason there are not that many four-module kits available, and those that exist usually have a more conservative speed. With DDR5 it's always better to use only two modules (even with DDR4 that is advised, but four modules can at least work quite decently there).

This also means that DDR4 is actually better for high-capacity memory configurations such as 128 GB total, because:
- It doesn't experience such a large drop in the electrical properties of the memory system when using four modules
- Four-module high-capacity kits are readily available (and at a lower price)
- Four-module kits are actually certified on the memory QVL at MSI
- It will most likely outperform its DDR5 equivalent, because DDR4's lower latencies beat the low frequencies that DDR5 is forced down to in this configuration.
The overall higher DDR5 latencies just can't be compensated for by higher RAM frequencies anymore, since using four DDR5 modules requires lower frequencies to be stable.
See also RAM performance scaling.

Of course, on AM5 there is no option to go DDR4; it's DDR5-only. And eventually, even Intel will move to DDR5 only. So either make do with two modules and have the RAM still run at nice speeds, or use four modules in the knowledge that there might be issues and the RAM speed will end up lower. XMP speed might not be stable, so the "DRAM Frequency" setting might have to be lowered manually below XMP for it to work.

Generally, in case of RAM problems, no matter the technology, there are three possibilities, which can also be used in combination:
- Lower the frequency
- Loosen the timings
- Raise the voltage(s)

But in some cases, buying different RAM might be the best solution.


3) Amount of RAM

For a decent system up to mid-range, 16 GB (as 2x 8 GB) has been the norm for a long time, for good reason. Now, with DDR5, 32 GB (as 2x 16 GB) is slowly becoming the amount that a lot of people go for, at least from nice mid-range systems upwards. While 16 GB is actually still enough even for the most recent games, the system will be a bit more future-proof with 32 GB total. Anything beyond that, however, is useless for gaming; it only tends to make things worse.

Why is that? Games don't really need more than 16 GB. A lot of games are developed with the lucrative console market in mind, and even the PlayStation 5 only has 16 GB of RAM. So games are designed from the ground up not to need more RAM, which then also applies to the PC versions of those games. There are only very few games that can use more than 16 GB of RAM, and it doesn't even make them run much faster. But I don't know a single game that will use more than 32 GB of RAM; they are not even anywhere near that. So even for a high-end gaming system, I would never use more than 32 GB total, when no game can use it anyway (and that's not about to change either). The 2x 8 GB (mostly DDR4) and 2x 16 GB kits always cause the least trouble and run the fastest, which is why one of those is the best choice for a gaming PC.

64 GB RAM or more can be justified for large video editing projects, rendering, heavy photoshop use, running lots of VMs and such cases. However, 64 GB amounts to a waste of money for gaming, no matter what. Before any game will ever touch more than 32 GB, the whole PC will be long outdated, because it will take many years. Right now, most games restrict themselves to 16 GB maximum, because so many potential buyers out there have 16 GB RAM in their system. The next step would be for games to use up to 32 GB, but we're not even there yet. So no system that is put together primarily for gaming should use more than a kit of 2x 16 GB RAM.

We could just say: ok, the money for that 64 GB of RAM (or more) would be wasted because it has no benefit for gaming, but "more is better", so let people use more RAM in their nice gaming system. However, using large 32 GB RAM modules and/or four memory modules not only has no benefit, it also has a negative impact on the memory system. The bigger modules usually tend to run slower, and these configurations also cause more stress for the memory system, increasing the likelihood of problems. So for gaming, I would never choose a configuration that can only cause problems for the memory system, while providing no benefit from that much RAM being available.


Recommendations for use on modern consumer mainboards:
8 GB RAM: Use 2x 4 GB, or even 1x 8 GB if RAM performance isn't critical anyway - this is ok for entry-level systems, office work etc.
16 GB RAM: Use 2x 8 GB - for up to mid-range (gaming) systems
32 GB RAM: Use 2x 16 GB - for nice mid-range to high-end gaming systems (when all other bottlenecks are removed) and semi-pro uses beyond gaming
48 GB RAM (DDR5 only): Use 2x 24 GB - for nice mid-range to high-end gaming systems (when all other bottlenecks are removed) and semi-pro uses beyond gaming
64 GB RAM: Use 2x 32 GB - purely "beyond gaming" - only necessary for professional use - preferable over any four-module configuration
96 GB RAM (DDR5 only): Use 2x 48 GB - purely "beyond gaming" - only necessary for professional use - preferable over any four-module configuration
128 GB RAM: Use 4x 32 GB - purely "beyond gaming" - only necessary for professional use
256 GB RAM: Use 4x 64 GB - purely "beyond gaming" - only necessary for professional use

These last two configurations - using four dual-rank high-capacity modules - put maximum stress on the memory system, so they will probably be restricted to something like DDR4-3200 or lower, or DDR5-5200 or lower, respectively. Any higher speeds might not run reliably.

The new DDR5-only option of 2x 24 GB is quite similar to 2x 16 GB, since the 24 GB modules should still be single-rank, basically making them as easy to run as the 16 GB modules. And thus preferable to the 32 GB modules, which are definitely dual-rank and put a higher stress on the memory system.

Also, for 128 GB total, I recommend DDR4, not DDR5. DDR5 really doesn't run well with 4x 32 GB; it would be restricted to quite low frequencies, pretty much negating the DDR5 advantage. With DDR5 RAM, I would actually never recommend using four modules, not even 4x 8 GB (the 8 GB modules are slower, and 2x 16 GB works better).

As for the XMP speed: for all the DDR4 configurations up to 64 GB total, I usually recommend DDR4-3600 (see chapter 4). For DDR5, the sweet spot is probably DDR5-6000. Above that, it gradually becomes more challenging to stabilize. The high DDR5-6xxx range, or even DDR5-7xxx, is something for enthusiasts who know what they're doing; that's not a "plug & play" speed anymore (especially with AM5), and experience is required to make it work.



3b) How to increase the RAM size when you have 2x 4 GB or 2x 8 GB RAM?

First choice: Replace the 2x 4 GB with 2x 8 GB, or the 2x 8 GB with 2x 16 GB. The new RAM should be a kit of matched modules. This ensures the best performance and the least problems, because in the end there are only two modules again.

Second choice: Add a kit of two matching modules to your two existing modules. But you might not be able to get the same modules again; even if they are the same model, something internal might have changed. Or you might toy with the idea of adding completely different modules (for example, adding 2x 8 GB to your existing 2x 4 GB). All of this can cause problems. The fewest problems can be expected when you add two modules that are identical to your old ones. But even then: you are now stressing the memory system more with four modules instead of two, so the attainable RAM frequency might drop a little. It's also electrically worse on a mainboard with a daisy-chain layout, as explained under 1).

Lastly, adding just one more module (for three modules total) is by far the worst choice, for several reasons. Every desktop platform has a dual-channel memory setup. This means it works best with two modules, and it can work decently with four. And if you only use the PC for light office work, even a single 4 GB or 8 GB module would do. But in a PC where performance matters, for example for gaming, getting a single RAM module as an upgrade when you have two existing modules is not good at all. The third module will be addressed in single-channel mode, while simultaneously ruining the memory system's electrical properties and forcing everything to run at whatever the slowest module's specification is.

Note: When upgrading the RAM, it's always good to check for BIOS updates, they often improve compatibility with newer RAM modules (even if it's not explicitly mentioned in the changelog).


4) DDR4 only: Today's sweet spot of DDR4-3600 with the latest CPUs

On AMD AM4, DDR4-3600 has been the sweet spot for quite a while. But Intel also introduced memory controllers in their 11th and 12th gen CPUs that require a divider above a certain RAM frequency. Only up to DDR4-3600 (but that is pretty much guaranteed) do the RAM and the CPU's memory controller (IMC) run at the same frequency (Intel calls this "Gear1 mode"; on AMD AM4 it's "UCLK DIV1 Mode" or "UCLK==MEMCLK"; generally this can be called "1:1 mode"). Somewhere above DDR4-3600, depending on the IMC's capabilities, the IMC has to run on a divider for it all to work (1:2 mode), which makes it run at half the RAM frequency. This costs a lot of performance.

An example on Intel Z590 with a kit of DDR4-3200: The IMC doesn't require a divider and can comfortably run in 1:1 mode (Gear1), which has the best performance.

BIOS OC.png


The Gear2 mode that becomes necessary at high RAM frequencies has a substantial performance penalty, because the latencies increase (everything takes a little longer). This basically leads to the same situation that we already know from AMD AM4: RAM frequencies that are considerably above DDR4-3600 are almost useless, because of the divider being introduced for the IMC (memory controller). The performance loss with a divider is just too significant.

For the RAM performance to be on the same level again as DDR4-3600 without a divider (1:1 mode), it requires something like DDR4-4400 (!) with the divider in place (1:2 mode).
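In clock terms, as a simplified illustration (the DDR rate in MT/s is twice the actual memory clock):

DDR4-3600 -> memory clock 1800 MHz -> Gear1 (1:1): IMC at 1800 MHz
DDR4-4400 -> memory clock 2200 MHz -> Gear2 (1:2): IMC at 1100 MHz

So despite the much higher transfer rate, the memory controller itself runs far slower in Gear2, which is where the added latency comes from.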

Looking at the high prices for DDR4-4400 kits, or at what it takes to overclock a normal kit of RAM to that level, it's not practical. So with Intel 11th to 14th gen CPUs on DDR4 boards, and of course AMD AM4 CPUs, the "sweet spot" is usually DDR4-3600. This frequency is known not to require a divider for the memory controller and thus gives the best performance and bang for the buck.

Some of the more recent CPU models can sometimes go a bit above DDR4-3600 without requiring a divider for the memory controller. But DDR4-3600 almost always runs well in 1:1 mode and has a better price/performance than RAM with higher specs, so it's still the top pick.

Here's an example of an AMD system (X570 with Ryzen 3900X). The tool HWinfo64 can show those frequencies in the "Sensors" window.
DDR4-3866 is too much to run in 1:1 mode, so the divider for the memory controller is active and performance is worse.
DDR4-3600 manages to run in 1:1 mode and the performance is better.

divider.png


The best thing on both platforms nowadays is to run DDR4-3600 without a divider and with some nice low timings, if possible. Something like DDR4-4000 will usually make the BIOS enable the divider for the memory controller, and it will be slower overall than DDR4-3600 despite the higher RAM frequency. This is because the latencies effectively increase when the memory controller has to work at a lower frequency. With a DDR4-4000 kit of RAM, for example, I would enable XMP, but then manually set a DRAM frequency of DDR4-3600. This should make the BIOS remove the divider for the memory controller, and the performance will immediately be better.

Here's a page from an MSI presentation about 11th gen Rocket Lake CPUs, showing the increased latencies when the divider comes into play:
Gear1.jpg

And here's from an AMD presentation about the Ryzen 3000-series, showing similarly increased latencies once the divider is active:
AMD latencies.png


With the higher DDR5 speeds, a divider is practically always used, because it's not feasible to run the memory controller at the same speed anymore. But with DDR5, the divider for the memory controller carries less of a penalty than with DDR4, because DDR5 can access a module via two separate sub-channels of 2x 32 bits (instead of one 64-bit channel as on DDR4). This allows for better interleaving of memory accesses on DDR5 and alleviates most of the latency penalties. On AMD, the FCLK can be left at 2000 MHz with DDR5; that seems to be the new "sweet spot".
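A simplified illustration of why the narrower sub-channels don't cost anything per access: both standards still deliver one full 64-byte CPU cache line per burst, DDR5 just gets there with a longer burst on a narrower channel:

DDR4: 64 bits wide x burst length 8 = 512 bits = 64 bytes (one cache line)
DDR5: 32 bits wide x burst length 16 = 512 bits = 64 bytes (per sub-channel)

And since the two DDR5 sub-channels operate independently, two such bursts can be in flight on one module at the same time, which is what enables the better interleaving.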


5) RAM stability testing

Memtest86 Free
from https://www.memtest86.com/
I use this as a basic stability test on a new system before I update the BIOS to the newest version (which is always one of the first things to do, since the factory BIOS will usually be quite outdated). Also, since it runs from a USB stick/drive, I use it as a first check before booting Windows whenever something has significantly changed with the RAM or its settings. One or two passes give me a good idea of whether the system is generally stable enough to start installing Windows (or boot it).

It's a good first test if you are completely unsure about stability, as well as a good "finisher" if you want to be extra sure that everything is ok with your memory system after doing other testing. The main advantage is that it runs from USB. The main disadvantage is that RAM tests in Windows are more thorough in catching errors.
Launch the included ImageUSB program to prepare a USB drive with it, then boot from that drive (press F11 during POST for the boot menu).
The row hammer tests at the end, which test for a purely theoretical vulnerability and take a long time, can be skipped.


Once in Windows, a quick way to detect RAM instability is TestMem5, or TM5 for short: https://github.com/CoolCmd/TestMem5
TM5 delivers a good and relatively quick indication of RAM stability. Run it as admin. I like to run it with the "1usmus_v3" configuration, which can be selected under Settings, because it reliably detects instability for me. A full run takes 90 minutes, but if there's instability, I found it should detect errors much earlier than that.
This is my go-to RAM test in Windows, because it is pretty reliable at revealing RAM errors when things are not 100% stable yet.

Example of unstable RAM (found after three minutes already):

Screenshot.png


Any errors are unacceptable, meaning something about the RAM configuration has to be changed until it passes without errors.
This example screenshot is not from me; you can see they used the "Universal 2" configuration, whereas I prefer the "1usmus_v3" one, as mentioned.

Now, armed with just these two tools (Memtest86 for a basic stability test before even installing/booting Windows, and TM5 for more thorough testing in Windows), you should be able to detect most instability just fine. The following tools are more for when you are really serious about RAM testing, for example if you manually tune all the timings and want to test in every way possible.


To test RAM stability even more thoroughly, there is a test from Google called GSAT (Google stressapptest). It was specifically developed by Google to detect memory errors, because they use ordinary PCs instead of specialized servers for a lot of things. The only downside is that it takes a bit of time to set up. To run GSAT, you first have to enable the "Windows Subsystem for Linux" feature under "Turn Windows features on or off".
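On reasonably recent Windows 10/11 builds, a single command should also take care of the whole setup (run in an administrator PowerShell or Command Prompt; on older builds, use the Windows features dialog instead):

wsl --install

By default this enables the required features and installs Ubuntu in one go, so the Microsoft Store step below may not even be necessary.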



After the necessary reboot, open the Microsoft Store app and install "Ubuntu", then run Ubuntu from the start menu.
It will ask for a username and password; they are not important, just enter a short password that you can remember, since you need to enter it for the update commands.
Then run the following commands one after the other (copy each line, then right-click into the Ubuntu window to paste it, then press enter):

sudo apt-get update
sudo apt full-upgrade -y
sudo apt-get install stressapptest

Then you can start GSAT with the command:
stressapptest -W -M 12000 -s 3600

This example tests 12 GB of RAM (in case of 16 GB total, because you need to leave some for Windows), for 3600 seconds (one hour). You can also enter -s 7200 for two hours.
If you have more RAM, always leave 4 GB for Windows, so with 32 GB, you would use "-M 28000".
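For example, on a 32 GB system, a two-hour run would look like this:

stressapptest -W -M 28000 -s 7200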
GSAT looks unspectacular, just some text scrolling through, but don't let that fool you: the tool is pretty stressful on your RAM (as it should be).
At the end, it has to say Status: PASS, and there should be no so-called "hardware incidents". Otherwise it's not stable.


Then there's HCI Memtest, which is quite good. There is a useful helper for it called MemTestHelper: https://github.com/integralfx/MemTestHelper/releases/tag/v2.2.0
It requires Memtest 6.4, which can be downloaded here: https://www.3dfxzone.it/programs/?objid=18508
(In the newest Memtest 7.0, they made a change so that MemTestHelper doesn't work anymore, to push users toward buying Memtest Pro.)

Put both tools in the same folder. Start MemTestHelper, and with 16 GB RAM, you can test up to 12000 MB (the rest is for Windows).
Let it run until 400% is passed. That's a good indicator that your RAM is stable. If you want to be really sure, let it run to 800%.

memtest_1.png


Another popular tool among serious RAM overclockers is Karhu, from https://www.karhusoftware.com/ramtest/
But it costs 10€ to register, so I would just use the other, free programs (unless RAM OC is your hobby).


A stability test that also challenges the memory controller a lot, and is therefore definitely useful to round out the RAM-related testing:
Linpack Xtreme from https://www.techpowerup.com/download/linpack-xtreme/

Run Linpack, select 2 (Stress test), then 5 (10 GB), set at least 10 trials, press Y to use all threads, then 2x N, and let it do its thing.
It's one of the best tools for detecting instability, but be warned: it also generates a lot of heat in the CPU. So I would watch the temperatures using HWinfo64 Sensors.
Each trial has to say "pass", and it has to say "checks passed" at the end.

linpack.png


It also puts out a "GFlops" number, which is actually a decent performance metric for quickly judging whether a certain RAM tuning (lowering timings) has performance benefits.



An important note about RAM and heat: higher ambient temperatures are not good for RAM stability. The RAM might be perfectly stable in a RAM-specific stress test, but depending on the graphics card (its power consumption and cooling design), once it dumps its heat into the case very close to the RAM slots during gaming, there can be RAM-related crashes. Simply because it heats up the RAM a lot and makes it lose stability.

So to be absolutely sure that the RAM is stable even when it's hot, it can be good to run something like FurMark alongside the RAM stability test. Not for hours, because FurMark creates extreme GPU load, but just for 20 minutes or so, to really heat things up. A lot of times, the fins of the cooler are oriented towards the mainboard and the side panel, so the heat comes out from the sides of the card, and the RAM sits right above that.

If your RAM is fine in isolated RAM stress tests, but you get crashes in games (or when otherwise loading the GPU) with the same RAM settings, then you need to loosen those settings a bit to add headroom for those circumstances. Go by the three remedies for RAM instability: loosen the timings and/or lower the frequency and/or raise the voltage.



Deep-diving a bit more into RAM:
It can quickly become a bit complicated, but if there are any questions, feel free to ask.


My other guides:
Guide: How to find a good PSU
Guide: How to set up a fan curve in the BIOS


Someone asked me if they can thank me for my work by sending me something via PayPal: yes, that's possible, just write me a message and I'll tell you my PayPal 😉
 
my reply was about the minimal difference between 2-slot and 4-slot boards, which might be true for DDR5, but not so much for DDR4 (on Ryzen),

It's not MIGHT, because this is not my opinion (there are too many opinions in this topic :biggrin:)
It's a fact present in all the Intel and AMD datasheets.
Again take a look here: https://forum-en.msi.com/index.php?...14th-gen-cpus-raptor-lake-ddr5-support.373119
Even at default voltages (VMEM = VDDQ = VDD2 = 1.1V), the difference is only 400 MHz.
For higher voltages (XMP), the difference is only 100-200 MHz.
And if you use insane memory & IMC voltages (needed for the 7600-8400 range), that 200 MHz difference means nothing.

On the other hand, you cannot compare Intel 13th/14th gen with Ryzen 7XXX.
For Intel this is the 2nd DDR5 CPU generation; for AMD it is the first one (not mature enough yet).
:beerchug:
 
    • Two Modules > Four: Using two RAM sticks in dual-channel configuration offers better performance and stability compared to four, as four sticks may lead to issues at higher speeds.
    • Single vs. Dual-Rank: Dual-rank RAM can enhance performance slightly but may present compatibility challenges when using four sticks.
    • Stability Testing: It's essential to test new RAM with tools such as MemTest86 to confirm there are no errors or crashes.

    • I hope this will help you
 
There is no confusion at all.
And don't buy everything you find on the internet!
These days all kinds of (young) people share their "wisdom" over the internet, and the result is terrible for everyone.
You can find the Intel official charts and tables here: https://forum-en.msi.com/index.php?...4th-gen-cpus-raptor-lake-ddr5-support.373119/
So there are 2 different things involved here:
- SPC (Slots Per Channel)
- DPC (DIMMs Per Channel)
Obviously the motherboards with 1 SPC (2 memory slots) are faster than those with 2 SPC (4 memory slots).
And the reason is very simple: more memory slots ---> more wires ---> more electrical noise
And more electrical noise leads to lower signal quality and more CRC errors, so obviously lower speed in the end.
BUT ...
No matter what others say, when it comes to memory speed, the difference between a motherboard with 2 memory slots and a motherboard with 4 memory slots is minimal!!!
A top 2-slot motherboard is guaranteed for DDR5-8000.
A top 4-slot motherboard is guaranteed for DDR5-7800 (example: https://www.msi.com/Motherboard/MEG-Z790-ACE-MAX/Specification )

The bottom line: when it comes to memory speed, the motherboard is almost irrelevant.
The main limitation here comes from the CPU IMC.
In case of XMP (IMC overclocking, undertiming and overvolting), the difference between 2x single-rank and 4x dual-rank can reach 2000 MHz!!!
See again the specs for MSI Z790 Ace Max.
Thanks for your reply.
But my statement was referring to the Intel documents. Here is a snippet from the Intel 12th gen processor manual, vol 1:
12th_Gen_Memory_SKU.jpg


Have a look at "2DPC" for DDR5. There, values are specified for "1 DIMM" and "2 DIMMs 1R/2R",.
If "2DPC" is actually meant to mean "populate both slots belonging to a channel", why do they specify a value for "1 DIMM", and then values for "2 DIMM 1R" and "2DIMMs 2R"?
The "1 DIMM" value under "2DPC" seems to make only sense if they have "2DPC" refer to a 4 slot mainboard layout, otherwise would it not already been covered by the data under "1DPC"?
What am I missing here?
 

That "Configuration" line is for Slots Per Channel (SPC).
In the "Maximum Frequency" line you can find the DPC (1 or 2) info.
Juniors everywhere ... :biggrin:
 
I dunno guys, I just bought two additional sticks to my 2 overclocked to 6400 (originally 5600) XPG Lancer sticks and all four sticks work just fine with that overclocking setup with no issues at all on z790 Tomahawk Wifi.
 
Give us real numbers that show the main information in the thread, which is well documented and proven, instead of "I have 4 and they work great". Of course 4 will work, but is the access rate actually faster? I run 4, but since I'm not predominantly gaming, it's what's right for my use case.
 
I mean, maybe this important and "well documented" number is lower in my case too, but does it matter if my gaming fps is the same? I don't feel like I lost any frames in Star Citizen, and that is all that matters to me.
 
There can be success stories like this, especially if the CPU has a particularly good IMC quality (it's a silicon lottery, similar to the IA cores = the actual CPU cores, but separate from those). We see several examples of problems with four modules every week here on the forum, but that does not exclude someone running such a configuration with more success, especially on 14th gen / 700-series, probably with 4x 16 GB (the least taxing of the four-module configurations), and maybe with a strong IMC. After all, they do the same for the memory QVL: they just bin CPUs until they find one with a very good IMC quality; that's how they can show "A-OK" for a bunch of kits with enthusiast-grade speeds or four-module combinations.
 
Hey all,
I'm a little overwhelmed by all this, but I think I'm getting the basics of what @citay has said.
I'm similar to what @truthntraditio156902db was talking about a couple of pages back, in that I use a lot of RAM for my workflow.
I just splashed out on a new system, rocking the latest "9800X3D" on an "MSI MPG X870E Carbon Wifi" board.
I happened to purchase, before reading this, 2x "Corsair Vengeance DDR5 64GB (2x32GB) 6000MHz CL30 AMD EXPO" (CMK64GX5M2B6000Z30), totalling 4 sticks of RAM!
Now, I literally just got all the parts these past few weeks and in the Black Friday deals this weekend, but haven't put it together yet as I'm waiting for my GPU to arrive tomorrow. But from what I'm reading here, I'm worried about my configuration now.
I was reading elsewhere that the memory I got is perfect for the "sweet spot" for the EXPO settings, but now I'm confused.
With 4 sticks, will it drop to the motherboard's lowest memory settings on their page, which is 4800+ MT/s to be stable? Or will it start crashing a lot? I'm a bit worried!
Here is what the MSI motherboard page states...

4x DDR5 UDIMM, Maximum Memory Capacity 256GB
Memory Support DDR5 8400 - 5600 (OC) MT/s / 5600 - 4800 (JEDEC) MT/s
Ryzen™ 9000 Series Processors max. overclocking frequency:
• 1DPC 1R Max speed up to 8400+ MT/s
• 1DPC 2R Max speed up to 6400+ MT/s
• 2DPC 1R Max speed up to 6400+ MT/s
• 2DPC 2R Max speed up to 4800+ MT/s

Will it be just fine, or will I have to change things? And what should I have to check to make sure it is all fine?
I'm unsure what to do as there is a limited window with the current sales and returns, and I've also reached my budget limit.
Any help or advice would be greatly appreciated.
Many thanks.
 
I happened to purchase, before reading this, 2x "Corsair Vengeance DDR5 64GB (2x32GB) 6000MHz CL30 AMD EXPO" (CMK64GX5M2B6000Z30), totalling 4 sticks of RAM!
Now, I literally just got all the parts these past few weeks and in the Black Friday deals this weekend, but haven't put it together yet as I'm waiting for my GPU to arrive tomorrow. But from what I'm reading here, I'm worried about my configuration now.
I was reading elsewhere that the memory I got is perfect for the "sweet spot" for the EXPO settings, but now I'm confused.

Sweet spot, yes, but only with two modules. With four modules, you make things a lot more difficult for the memory system, especially on AM5, enough to likely make this sweet spot well out of reach.

With 4 sticks, will it drop to the motherboard's lowest memory settings on their page, which is 4800+ MT/s to be stable? Or will it start crashing a lot?

Either one, or even both. You put so much more stress on the memory system that trying to run close to DDR5-6000 will likely be unstable, so the speed will have to be dropped quite significantly to make it stable, and you end up at DDR5-4800 (or less, depending).

Will it be just fine, or will I have to change things? And what should I have to check to make sure it is all fine?
I'm unsure what to do as there is a limited window with the current sales and returns, and I've also reached my budget limit.

Seeing how you got an X3D CPU, surely this system is meant more for gaming? Then I would just pack up one of the RAM kits and return it, and use one kit of 2x 32 GB in slots A2 and B2 only. This is already complete overkill for gaming, as most games won't really make use of more than 16 GB (since a lot of them also have to run on the consoles, which are all equipped with 16 GB maximum). 64 GB total is more than plenty for the foreseeable future; for gaming it will most likely remain overkill for years to come.

Now, if you happen to have some professional workloads that require more RAM than 64 GB, you could think about returning both RAM kits and opting for a kit of 2x 48 GB. Again, the idea here is to use two modules only, which is much easier for the memory system to deal with. Then you can use nicer DDR speeds again.
 
Firstly, thank you so much @citay for your incredibly fast response, I really appreciate it and the advice you've given so far.

Seeing how you got an X3D CPU, surely this system is meant more for gaming?
I got the 9800X3D as a stopgap over the holidays for yeah, primarily gaming, but then I'm going in on the 9950X3D when it comes out early next year for the workflow stuff... I should hopefully be able to get most of my money back on the 9800X3D due to demand! 🤞😂

Now, if you happen to have some professional workloads that require more RAM than 64 GB, you could think about returning both RAM kits and opting for a kit of 2x 48 GB. Again, the idea here is to use two modules only, which is much easier for the memory system to deal with. Then you can use nicer DDR speeds again.
Yeah, I've got heavy Adobe workloads that I do, hence wanting more RAM.
But which 2x48 kit would you recommend then with the combo I have? I've only seen ones with lower MT/s and much higher CL than the ones I already have.
Or, when it comes down to it, do those numbers not matter so much, because the gains are negligible at that tier?

Again, thanks for the advice.
 
But which 2x48 kit would you recommend then with the combo I have? I've only seen ones with lower MT/s and much higher CL than the ones I already have.

With the 2x 48 GB kits, due to the module size, it's a bit more difficult to get them up to speed. That CMK96GX5M2B6000Z30 kit you picked out is the top 2x 48 GB EXPO kit available, but I don't like its 1.4V EXPO profile; that's enthusiast-grade, and I would prefer it a bit less stressed. And as you suspected, the differences between kits are not as big as the timings suggest. For example, just by tweaking a single RAM timing called tREFI, you would probably see bigger gains than from using a kit with a better EXPO profile, also see here.

On that shop website the selection is very limited, but normally I would go for something like the G.Skill Flare X5, F5-5600J4040D48GX2-FX5. Of course, you could also get the Corsair kit. But DDR5-6000 CL30 for 48 GB modules is very tight indeed; I'm not entirely sure how the memory system copes with that. The MSI AMD 800-series boards also seem to be in a very immature state of BIOS development, just look at this support site, all beta versions, when do you ever see that... So if worst comes to worst, that kit might need some manual settings to run properly, deviating a bit from EXPO. Hard to say in advance.
 
I built a Z790 13900KF rig for film scoring and orchestration, using 128 GB of RAM (4x 32 GB Corsair Vengeance 5600 DDR5), and I even spent more for the CL28 version because I didn't know any of this stuff at the time. My Carbon WiFi runs 4000 stock, and XMP is supposed to take it to 5600, but that has never worked. I played around with MSI's Memory Try It! in the BIOS and landed on 5200 CL34, and that has been stable for quite a while. Maybe give that a whirl when you get your build going.

Never thought DDR5 was going to be such a headache... ;)
 
I had 4 sticks of DDR5-6000 running stable dual-rank on a 13600K, before I took 2 out for my AMD build. Now I am looking to buy some single-rank sticks for my AMD 9000-series on an X670E Carbon. The Kingston Fury Beast caught my attention. Does someone have 4 modules working at high speed?
 
Unfortunately, I live in the UK, and I have limited choices. Corsair seems to be the dominant seller at that tier.
Do you have any links to any articles or guides on how to manually tweak settings in order to run memory smoothly at max capacity?
Thanks again for all your help with this @citay


And @PaulieDC, I think I'm going to go for the 2 sticks of 48 GB just to ease my mind... I think 96 GB should be enough for me at the end of the day. But thanks for sharing your experience.
 
Hum, UK... I would think you have enough retailers, or am I wrong? Are you planning to buy, or did you already?

I would not gamble on getting most of your money back. It's still second-hand, and everybody is caught up in buying CPUs. Something else to keep in mind: when you add more cores, you produce more heat, which may activate thermal throttling.
 
Do you have any links to any articles or guides on how to manually tweak settings in order to run memory smoothly at max capacity?

With 2x 48 GB you'd be in a much better starting place. Sometimes the RAM modules have a second EXPO/XMP profile available with slightly lower speed and/or looser timings, in case the first profile causes problems. If you find that EXPO (the first/only profile) is unstable and it lacks a second profile, you can basically do something similar yourself: for example, lower the "DRAM Frequency" by hand to DDR5-5600 or so (with EXPO still enabled), or try the "Memory Try It!" presets. But with any luck, EXPO will work with the 2x 48 GB kit.
 
With the higher DDR5 speeds, a divider is practically always used, because it's not feasible to run the memory controller at the same speed anymore. But with DDR5, the divider for the memory controller carries less of a penalty than with DDR4, because DDR5 can access a module via two separate sub-channels of 2x 32 bits (instead of one 64-bit channel as on DDR4). This allows for better interleaving of memory accesses on DDR5 and alleviates most of the latency penalties. On AMD, the FCLK can be left at 2000 MHz with DDR5; that seems to be the new "sweet spot".
Most variables are 32-bit, and GPUs like the Nvidia 4000 series also use 32-bit cores on a 64-bit machine. So this is for Windows, I suppose, and 64-bit software... So if there is a 64-bit calculation needed, it will take longer in 16-bit memory, because it needs to go to that range twice, at two modules at the same time, in two channels from the CPU to the RAM. On the other hand, will it be faster to write 4 variables of 32 bits in a 64-bit memory than 4 variables of 32 bits in a 32-bit memory? Knowing there is an instruction set in the CPU to handle 64 bits, so it doesn't need to clock down. And what about the GPU using that memory with only two channels available? I can imagine that in the case of 32 GB and 64 GB with the same timings, the 64 GB will be a little bit slower, because of a longer key to access that memory. I personally see a big difference between 6 and 8 cores running Unreal Engine 5 on 2x 16 GB (recommended 64 GB).
 
DDR5 splits the memory module into two independent 32-bit addressable subchannels to increase efficiency and lower the latencies of data accesses for the memory controller. The data width of the DDR5 module is still 64-bit. However, breaking it down into two 32-bit addressable channels increases overall performance.

This has nothing to do with x86 vs x64 (32 bit and 64 bit) software within the OS. For the RAM it's all done in hardware between the DDR5 memory controller inside the CPU and the RAM modules, the OS is not involved in it.
 