RAM explained: Why two modules are better than four / single vs. dual-rank / stability testing

citay

Since some people run into problems with four RAM modules on modern MSI mainboards, i wanted to explain the reasons behind that, and why two modules are often superior. The main reason lies in the way the memory slots are connected to the memory controller, which is inside the CPU. So the first explanation is about:


1) RAM slot layout

All regular mainboards and desktop CPU models have a dual-channel memory system. Since a lot of boards offer four RAM slots, a pair of two slots have to each form a RAM channel. So the four RAM slots are not individually addressed, but in pairs, as two channels. The different ways to connect the RAM slot pairs on the board are either "Daisy chain" or "T-Topology". This RAM slot layout decision - the way the slots are connected - has a big influence on how many modules (two or four) the board works best with.

Here is a slide from an MSI presentation, showing that almost all of today's boards have a "daisy chain" memory slot layout. This layout heavily prefers two-module operation. The presentation is a bit older, but it's safe to say that the vast majority of recent mainboards (for AMD and Intel) also have a daisy chain layout, and this is confirmed in several reviews. MSI especially are known to use this layout on almost all their modern boards. For other mainboard makers, it depends on the board model, but they also tend to prefer this layout.

Daisy Chain.jpg


Daisy chain means that the slot pairs are connected one after the other, and therefore optimized for two modules total. The right slot of each channel is the end point.
Using two RAM modules, they are to be inserted into slots 2 and 4, counted from the left, as per the mainboard manual. Meaning, into the second slot of each channel, and thus the end point. The reason is, this puts them at the very end of the PCB traces coming from the CPU, which is important for the electrical properties.
PCB (printed circuit board) traces are the thin signal lines that are visible on the mainboard, especially between the CPU and the RAM slots.

memory-layout.gif


Why is this important? The PCB traces, going from the memory controller contacts of the CPU, to each contact of the RAM slots, are optimized to result in exactly the same distance between all those points. They are essentially "zig-zagging" across the board for an electrically ideal layout, making a few extra turns if a direct line would lead to an uneven distance.

This is done so that, with two modules, a) each RAM module is at the very end of the electrical traces coming from the CPU's memory controller, and b) each module has exactly the same distance to the memory controller across all contacts. We are dealing with nanosecond-exact timings, so all this matters.
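To put rough numbers on this, here is a back-of-the-envelope sketch in Python. The signal speed of roughly 15 cm/ns on PCB material is a typical textbook value and an assumption here, not a figure from the MSI slides:

# Rough timing impact of a trace-length mismatch on the mainboard.
# Assumption: signals travel at about 15 cm/ns on typical PCB material.
PROPAGATION_CM_PER_NS = 15.0

def skew_ns(mismatch_cm):
    # Extra travel time caused by a length mismatch between two traces
    return mismatch_cm / PROPAGATION_CM_PER_NS

# DDR4-3600 performs 3600 million transfers per second,
# so a single bit window is only about 0.28 ns wide:
bit_window_ns = 1000 / 3600
print(f"bit window: {bit_window_ns:.2f} ns")    # ~0.28 ns
print(f"1 cm mismatch: {skew_ns(1.0):.3f} ns")  # ~0.067 ns, about a quarter of the window

So even a single centimeter of mismatch would eat a quarter of the bit window at DDR4-3600, which is why the traces are length-matched so carefully.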

On a mainboard with a daisy-chain RAM slot layout, this optimization is done with only two modules in mind, which go into slots 2 and 4 (on the board, those slots are called A2 and B2). This is the configuration that most buyers would use, and it also results in the best overclocking potential. This way, the mainboard makers can boast higher RAM overclocking frequencies when advertising the board, and the majority of buyers will have the ideal solution with two RAM modules.

Note: Never populate slots 1 and 3 first. When putting the modules into slot 1 and 3, the empty slots 2 and 4 would be similar to having some loose wires hanging from the end of each RAM contact, creating unwanted signal reflections and so on. So with two modules, they always need to go into the second slot of each memory channel (slot 2+4 aka A2 and B2), to not have "loose ends" after each RAM module.

Slots.png


Now the interesting question: What happens when we populate all four slots on a mainboard with a daisy-chain slot layout? Well, the modules in the second and fourth slots become "daisy-chained" after the modules in the first and third slots. This considerably worsens the electrical properties of the whole memory system.

With four modules, there are now two modules per channel, and the two modules of a channel don't have the same distance from the memory controller anymore. That's because the PCB traces go to the first slot, and then over to the second slot. This daisy-chaining - with the signal lines going to the first and then to the second module of a memory channel - introduces a lot of unwanted electrical handicaps when using four modules. The signal quality worsens considerably in this case.

Only with a "T-Topology" slot layout, the PCB traces have exactly the same length across all four slots, which would provide much better properties for four-module operation. But mainboards with T-Topology have gone a bit out of fashion, since most people use just two modules. Plus the memory OC numbers look much better with a daisy chain layout and two modules. So if the mainboard makers were to use T-topology on a board, they couldn't advertise with such high overclocking numbers, and people would think the board is worse (and it actually would be, for only two modules).

topology2.jpg
Example of an ASUS board with the rare T-Topology layout, advertising the fact that it works better with four modules compared to the much more common boards using the daisy-chain layout.


2) Single-rank vs. dual-rank

Another consideration is single-rank vs. dual-rank modules. This is about how a RAM module is organized, meaning, how the individual memory chips on the module are addressed. To put it simply: with DDR4, most (if not all) 8 GB modules are single-rank nowadays, as well as a bunch of 16 GB modules. There are also some 16 GB DDR4 modules that are dual-rank, and all bigger modules are always dual-rank. With DDR5, the 16 GB and 24 GB modules are single-rank, and the 32 GB and 48 GB modules are dual-rank. We'll come to the implications of this soon.

The capacity at which the modules start to be organized as dual-rank slowly shifts upwards as the technology advances. For example, in the early days of DDR4, there were a bunch of dual-rank 8 GB modules, but with modern RAM kits, those modules will be single-rank by now. Even the dual-rank 16 GB modules became less prominent as DDR4 developed further. With DDR5, the 8 GB modules are 100% single-rank from the start, and the 16 and 24 GB modules are almost certainly single-rank. Above that, it's dual-rank organization. Now, why is this important?

It has implications for the DDR speed that can be reached. The main reason is, a single-rank module puts less stress on the memory system. Dual-rank is slightly faster performance-wise (up to 4%), but also loads the memory controller more. One dual-rank module puts almost as much stress on the memory system as two single-rank modules! This can become an important factor once the DDR speed approaches certain limits.
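As a rough rule of thumb in code form (a sketch based on the DDR5 capacities mentioned above; real modules can deviate, so the spec sheet is the authority):

# Typical rank organization of DDR5 modules by capacity (rule of thumb)
DDR5_RANKS = {8: 1, 16: 1, 24: 1, 32: 2, 48: 2}

def rank_load(module_capacities_gb):
    # One dual-rank module stresses the memory system almost as much
    # as two single-rank modules, so we simply add up the ranks.
    return sum(DDR5_RANKS[gb] for gb in module_capacities_gb)

print(rank_load([16, 16]))          # 2 -> the easiest configuration
print(rank_load([32, 32]))          # 4 -> two dual-rank, close to four single-rank sticks
print(rank_load([32, 32, 32, 32]))  # 8 -> the hardest case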

What is the memory system? It consists of the CPU's integrated memory controller (IMC), the mainboard and its BIOS, and the RAM itself.
So the following factors all affect if the RAM can actually run at a certain setting:

- The mainboard (chipset, component/PCB quality etc.).
- The mainboard's BIOS memory support and the BIOS settings.
- The CPU's integrated memory controller (IMC), quality depends on the CPU generation as well as on the individual CPU (silicon lottery).
- The properties of the RAM modules.

Every modern mainboard will be happiest with two single-rank modules (for dual-channel operation), because this causes the least stress on the memory system and is electrically the most ideal, considering that the memory slots are connected in a "daisy chain". This fact is reflected in the maximum DDR frequencies that the mainboards are advertised with.

Let's look at DDR4 first. Here is an example from the highest MSI DDR4 board model using Intel Z690 chipset.
Specifications of MPG Z690 EDGE WIFI DDR4, under "Detail".
1DPC 1R Max speed up to 5333+ MHz
1DPC 2R Max speed up to 4800+ MHz
2DPC 1R Max speed up to 4400+ MHz
2DPC 2R Max speed up to 4000+ MHz

"DPC" means DIMM (=module) per channel, 1R means single-rank, 2R means dual-rank.

With 1DPC 1R = two single-rank modules (so, 2x 8 GB or 2x 16 GB single-rank), the highest frequencies can be reached.
With 1DPC 2R = two dual-rank modules (like 2x 16 GB dual-rank or 2x 32 GB), the maximum attainable frequency is lower, since the memory system is under more stress.
With 2DPC 1R = four single-rank modules (4x 8 GB or 4x 16 GB single-rank), the maximum frequency drops again, because four modules are even more challenging than two dual-rank modules.
And 2DPC 2R = four dual-rank modules (like 4x 16 GB dual-rank or 4x 32 GB) combines the downsides of the highest possible load on the memory controller with the electrical handicap of using four slots on a daisy-chain-mainboard.
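Expressed as a small lookup table in Python (numbers taken straight from the MSI spec list above, purely for illustration):

# Advertised maximums of the MPG Z690 EDGE WIFI DDR4, keyed by
# (DIMMs per channel, ranks per DIMM)
MAX_SPEED_MTS = {
    (1, 1): 5333,  # two single-rank modules
    (1, 2): 4800,  # two dual-rank modules
    (2, 1): 4400,  # four single-rank modules
    (2, 2): 4000,  # four dual-rank modules
}

print(MAX_SPEED_MTS[(1, 1)] - MAX_SPEED_MTS[(2, 2)])  # 1333 -> the cost of the worst case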

The last configuration can already be difficult to get stable at DDR4-3200 sometimes, let alone DDR4-3600. One could consider themselves lucky to get DDR4-3600 working with four dual-rank modules, maybe having to use more relaxed timings for example. The 16 GB and 32 GB modules also often don't have particularly tight XMP timings to begin with, like DDR4-3600 18-22-22-42.
By the way: The second timing (tRCD) is more telling and important than the first one (tCL) to determine the module quality, but most people only look at tCL = CAS Latency.


With the new DDR5 standard, this drop in attainable frequency is even more pronounced. From the initial specs of one of the top MSI Z690 boards:
Specifications of MEG Z690 ACE, under "Detail".
1DPC 1R Max speed up to 6666+ MHz
1DPC 2R Max speed up to 5600+ MHz
2DPC 1R Max speed up to 4000+ MHz
2DPC 2R Max speed up to 4000+ MHz

When going from two modules (1DPC) to four modules (2DPC), the attainable frequency drops drastically. With two single-rank modules (up to 16 GB per module), DDR5-6000 and above is possible according to MSI. With two dual-rank modules (for example 2x 32 GB), that goes down a little already. But with four modules, the memory system is under a lot more stress, and MSI are quite open about the result. This seems to be a limitation of the DDR5 memory system, which relies even more on a very clean signal quality. Using four DDR5 modules on a board with a daisy-chain layout clearly is not good in that regard.
This deterioration with four DDR5 modules is so drastic that the conclusion could be: DDR5 motherboards should come with only 2 dimm slots as standard (Youtube)

Now, with the 13th gen "Raptor Lake" Intel CPUs being available (13600K and up) which come with an improved memory controller, as well as newer BIOS versions containing some memory code optimizations, MSI have revised the frequency numbers for the boards a bit. Again looking at the Z690 ACE, the revised numbers are:
1DPC 1R Max speed up to 6666+ MHz
1DPC 2R Max speed up to 6000+ MHz
2DPC 1R Max speed up to 6000+ MHz
2DPC 2R Max speed up to 5600+ MHz

However, such specs are usually what their in-house RAM overclockers have achieved with hand-picked modules and custom RAM settings. And like many people have shared here on the forum before, it's not like you can drop in some DDR5-7200 or -7600 and expect it to just work, not even with the most high-end Z790 board and 13th gen CPU. Those aren't "plug & play" speeds, those high-end RAM kits are something that enthusiasts buy to have the best potential from the RAM (meaning, a highly binned kit), and then do a back and forth of fine-tuning in the BIOS and stress-testing to get it to where they want it. I have explained this more thoroughly in this post.

And this example is only for Intel DDR5 boards. They had about a one-year head start compared to AM5. What we're seeing on AM5 is, once people try to use four large DDR5 modules, they can consider themselves lucky if they can still get into the DDR5-5xxx range. Sometimes there are even problems getting it to boot properly; sometimes it will be stuck at low speeds and get unstable at anything even close to XMP speeds.

The main takeaway from all this for DDR5:

Whatever total RAM size is needed, it's better to reach it with two modules if decent speed/performance is required. Combining two kits of two high-speed modules each simply has a low likelihood of working. As mentioned, with four modules, especially dual-rank ones like 32 GB modules, the maximum frequency that the memory system can reach drops considerably, which makes XMP/EXPO speeds stop working. There's a reason that there are not many four-module kits available, and they usually have a more conservative speed. With DDR5 it's always better to use two modules only (even with DDR4 that is advised, but four modules can at least work quite decently there).

This also means that DDR4 is actually better for high-capacity memory configurations such as 128 GB total, because:
- It doesn't experience such a large drop in the electrical properties of the memory system when using four modules
- Four-module high-capacity kits are readily available (and at a lower price)
- Four-module kits are actually certified on the memory QVL at MSI
- They will most likely outperform their DDR5 equivalent, due to DDR4's lower latencies compared to the low frequencies that DDR5 is forced to run at in this configuration (see the latency sketch below).
The overall higher DDR5 latencies just can't be compensated for by higher RAM frequencies anymore, since using four DDR5 modules requires lower frequencies to be stable.
See also RAM performance scaling.
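To make the latency comparison concrete, here is the usual first-word latency calculation (a sketch; the example timings are common specs, not a particular kit):

def first_word_latency_ns(data_rate_mts, cas_latency):
    # The memory clock is half the transfer rate (DDR),
    # and CAS latency is counted in memory clock cycles.
    memory_clock_mhz = data_rate_mts / 2
    return cas_latency / memory_clock_mhz * 1000

print(first_word_latency_ns(3600, 18))  # DDR4-3600 CL18 -> 10.0 ns
print(first_word_latency_ns(4800, 40))  # DDR5-4800 CL40 (a common JEDEC spec) -> ~16.7 ns

So DDR5 needs its high frequencies to make up for the higher cycle counts; force it down towards DDR5-4xxx with four modules, and DDR4 comes out ahead.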

Of course, on AM5 there is no option to go DDR4, it's DDR5 only. And eventually, even Intel will move to DDR5 only. So, either make do with two modules and have the RAM still run at nice speeds, or use four modules in the knowledge that there might be issues and the RAM speed will end up being lower. XMP speed might not be stable, so the "DRAM Frequency" setting might have to be lowered manually from XMP for it to work.

Generally, in case of RAM problems, no matter the technology, there are three possibilities, which can also be used in combination:
- Lower the frequency
- Loosen the timings
- Raise the voltage(s)

But in some cases, buying different RAM might be the best solution.
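As a loose sketch of that approach in Python (the step sizes are arbitrary examples of mine, and each resulting candidate would have to be verified with an actual stress-test run like those in chapter 5):

def next_attempts(freq_mts, cl, dram_voltage):
    # The three knobs, each relaxed one step; they can also be combined.
    return [
        (freq_mts - 200, cl, dram_voltage),              # lower the frequency
        (freq_mts, cl + 2, dram_voltage),                # loosen the timings
        (freq_mts, cl, round(dram_voltage + 0.05, 3)),   # raise the voltage (within safe limits)
    ]

print(next_attempts(6000, 36, 1.35))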


3) Amount of RAM

For a decent system up to mid-range, 16 GB (as 2x 8 GB) has been the norm for a long time, for good reason. Now, with DDR5, 32 GB (as 2x 16 GB) is slowly becoming the amount that a lot of people go for, at least from nice mid-range systems upwards. While 16 GB is actually still enough even for the most recent games, the system will be a bit more future-proof with 32 GB total. Anything beyond that, however, is useless for gaming; it only tends to make things worse.

Why is that? Games don't really need more than 16 GB. A lot of games are developed with the lucrative console market in mind, and even the PlayStation 5 only has 16 GB of RAM. So games are designed from the ground up not to need more RAM, which then also applies to the PC versions of those games. There are only very few games that can use more than 16 GB RAM, and it doesn't even make them run a lot faster. And i don't know a single game that will use more than 32 GB RAM; they are not even anywhere near that. So even for a high-end gaming system, i would never use more than 32 GB total, when no game can use it anyway (and that's not about to change either). The 2x 8 GB (mostly DDR4) / 2x 16 GB kits always cause the least trouble and run the fastest, which is why one of those is the best choice for a gaming PC.

64 GB RAM or more can be justified for large video editing projects, rendering, heavy photoshop use, running lots of VMs and such cases. However, 64 GB amounts to a waste of money for gaming, no matter what. Before any game will ever touch more than 32 GB, the whole PC will be long outdated, because it will take many years. Right now, most games restrict themselves to 16 GB maximum, because so many potential buyers out there have 16 GB RAM in their system. The next step would be for games to use up to 32 GB, but we're not even there yet. So no system that is put together primarily for gaming should use more than a kit of 2x 16 GB RAM.

We could just say: ok, the money for that 64 GB of RAM (or more) would be wasted because it doesn't have any benefits for gaming, but "more is better", so let people use more RAM for their nice gaming system. However, using large 32 GB RAM modules and/or four memory modules not only has no benefits, it also has a negative impact on the memory system. The bigger modules usually tend to run slower, and these configurations will also cause more stress for the memory system, increasing the likelihood of problems. So for gaming, i would never choose a configuration which can only cause problems for the memory system, but doesn't provide any benefit from that much RAM being available.


Recommendations for use on modern consumer mainboards:
8 GB RAM: Use 2x 4 GB, or even 1x 8 GB if RAM performance isn't critical anyway - this is ok for entry-level systems, office work etc.
16 GB RAM: Use 2x 8 GB - for up to mid-range (gaming) systems
32 GB RAM: Use 2x 16 GB - for nice mid-range to high-end gaming systems (when all other bottlenecks are removed) and semi-pro uses beyond gaming
48 GB RAM (DDR5 only): Use 2x 24 GB - for nice mid-range to high-end gaming systems (when all other bottlenecks are removed) and semi-pro uses beyond gaming
64 GB RAM: Use 2x 32 GB - purely "beyond gaming" - only necessary for professional use - preferable over any four-module configuration
96 GB RAM (DDR5 only): Use 2x 48 GB - purely "beyond gaming" - only necessary for professional use - preferable over any four-module configuration
128 GB RAM total: Use 2x 64 GB if possible - purely "beyond gaming" - only necessary for professional use
256 GB RAM total: Use 4x 64 GB - purely "beyond gaming" - only necessary for professional use

These last two configurations, with their high-capacity dual-rank modules, are maximally stressing the memory system, so they will probably be restricted to something like DDR4-3200 or lower, or DDR5-5200 or lower respectively. Any higher speeds might not run reliably.

The new DDR5-only option of 2x 24 GB is quite similar to 2x 16 GB, since the 24 GB modules should still be single-rank, basically making them as easy to run as the 16 GB modules. And thus preferable to the 32 GB modules, which are definitely dual-rank and put a higher stress on the memory system.

Also, for 128 GB total, i recommend DDR4, not DDR5. DDR5 really doesn't run well with 4x 32 GB, it would be restricted to quite low frequencies, pretty much negating the DDR5 advantage. With DDR5 RAM, i would actually never recommend using four modules, not even 4x 8 GB (the 8 GB modules are slower and 2x 16 GB work better).

As for the XMP speed: For all the DDR4 configurations up to 64 GB total, i usually recommend DDR4-3600 speed (see chapter 4). For DDR5, the sweet spot would probably be DDR5-6000. Above that, it can gradually become more challenging to stabilize. Around the high DDR5-6xxx range or even into DDR5-7xxx, it's something for enthusiasts who know what they're doing, that's not a "plug & play" speed anymore (especially with AM5), and experience is required to make it work.



3b) How to increase the RAM size when you have 2x 4 GB or 2x 8 GB RAM?

First choice: Replace the 2x 4 GB with 2x 8 GB, or the 2x 8 GB with 2x 16 GB. The new RAM should be a kit of matched modules. This will ensure the best performance and the least problems, because there are only two modules again in the end.

Second choice: Add a kit of two matching modules to your two existing modules. But you might not be able to get the same modules again. Even if they are the same model, something internally might have changed. Or you might toy with the idea of adding completely different modules (for example, adding 2x 8 GB to your existing 2x 4 GB). This can all cause problems. The least problems can be expected when you add two modules that are identical to your old ones. But then there's still this: You are now stressing the memory system more with four modules instead of two, so the attainable RAM frequency might drop a little. Also, it's electrically worse on a mainboard with daisy-chain layout, as explained under 1).

Lastly, adding just one more module (to have three modules total) is by far the worst choice, for several reasons. Every desktop platform has a dual-channel memory setup. This means it works best with two modules, and it can work decently with four modules. And if you only use the PC for light office work, even a single 4 GB or a single 8 GB module would do. But in a PC where performance matters, for example for gaming, getting a single RAM module to upgrade when you have two existing modules is not good at all. The third module will be addressed in single-channel mode, while simultaneously ruining the memory system's electrical properties and making everything work at whatever the slowest module's specification is.

Note: When upgrading the RAM, it's always good to check for BIOS updates, they often improve compatibility with newer RAM modules (even if it's not explicitly mentioned in the changelog).


4) DDR4 only: Today's sweet spot of DDR4-3600 with the latest CPUs

On AMD AM4, DDR4-3600 has been the sweet spot for quite a while. But Intel introduced new memory controllers in their 11th and 12th gen CPUs which also require a divider above a certain RAM frequency. Only up to DDR4-3600 (but that pretty much guaranteed), the RAM and the CPU's memory controller (IMC) run at the same frequency (Intel calls this "Gear1 mode"; on AMD AM4 it's "UCLK DIV1 Mode" or "UCLK==MEMCLK"; generally this can be called "1:1 mode"). Somewhere above DDR4-3600, depending on the IMC's capabilities, the IMC has to run on a divider for it all to work (1:2 mode), which makes it run at half the RAM frequency. This costs a lot of performance.

An example on Intel Z590 with a kit of DDR4-3200: The IMC doesn't require a divider and can comfortably run in 1:1 mode (Gear1), which has the best performance.

BIOS OC.png


The Gear2 mode that becomes necessary at high RAM frequencies has a substantial performance penalty, because the latencies increase (everything takes a little longer). This basically leads to the same situation that we already know from AMD AM4: RAM frequencies that are considerably above DDR4-3600 are almost useless, because of the divider being introduced for the IMC (memory controller). The performance loss with a divider is just too significant.

For the RAM performance to be on the same level again as DDR4-3600 without a divider (1:1 mode), it requires something like DDR4-4400 (!) with the divider in place (1:2 mode).
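In numbers (a small sketch of the 1:1 vs. 1:2 relationship described above):

def memclk_and_imc_mhz(data_rate_mts, gear):
    # The memory clock is half the transfer rate; in Gear1 (1:1) the IMC
    # runs at the memory clock, in Gear2 (1:2) at half of it.
    memclk = data_rate_mts / 2
    imc = memclk if gear == 1 else memclk / 2
    return memclk, imc

print(memclk_and_imc_mhz(3600, 1))  # (1800.0, 1800.0) -> IMC at full speed
print(memclk_and_imc_mhz(4400, 2))  # (2200.0, 1100.0) -> IMC crawls at half speed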

Looking at the high prices for DDR4-4400 kits, or what it takes to overclock a normal kit of RAM to that, it's not practical. So with Intel 11th- to 14th-gen CPUs on DDR4 boards, and of course AMD AM4 CPUs, the "sweet spot" is usually at DDR4-3600. This frequency is known to not require a divider for the memory controller and thus gives the best performance and bang-for-buck.

Some of the more recent CPU models can sometimes go a bit above DDR4-3600 without requiring a divider for the memory controller. But DDR4-3600 almost always runs well in 1:1 mode and has a better price/performance than RAM with higher specs, so it's still the top pick.

Here's an example of an AMD system (X570 with Ryzen 3900X). The tool HWinfo64 can show those frequencies in the "Sensors" window.
DDR4-3866 is too much to run in 1:1 mode, so the divider for the memory controller is active and performance is worse.
DDR4-3600 manages to run in 1:1 mode and the performance is better.

divider.png


The best thing on both platforms nowadays is to run DDR4-3600 without a divider and with some nice low timings if possible. Something like DDR4-4000 will usually make the BIOS enable the divider for the memory controller and it will be slower overall than DDR4-3600, despite the higher RAM frequency. This is because the latencies are effectively increased when the memory controller has to work at a lower frequency. With a DDR4-4000 kit of RAM for example, i would enable XMP, but then manually set a DRAM frequency of DDR4-3600. This should make the BIOS remove the divider for the memory controller and the performance will immediately be better.

Here's a page from an MSI presentation about 11th gen Rocket Lake CPUs, showing the increased latencies when the divider comes into play:
Gear1.jpg

And here's from an AMD presentation about the Ryzen 3000-series, showing similarly increased latencies once the divider is active:
AMD latencies.png


With the higher DDR5 speeds, a divider is practically always used, because it's not feasible to run the memory controller at the same speed anymore. But with DDR5, the divider for the memory controller has less of a penalty than with DDR4, because DDR5 can access a module via two separate sub-channels of 2x 32 bits (instead of one 64-bit channel like on DDR4). This allows for better interleaving of memory accesses on DDR5 and alleviates most of the latency penalties. On AMD, the FCLK can be left at 2000 MHz with DDR5; it seems to be the new "sweet spot".
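The sub-channel difference in simple numbers (theoretical peak bandwidth only, as a sketch; real-world throughput is lower):

def peak_gb_per_s(data_rate_mts, bus_width_bits):
    # transfers per second times bytes per transfer
    return data_rate_mts * 1e6 * (bus_width_bits / 8) / 1e9

# DDR4: one 64-bit channel per module
print(peak_gb_per_s(3600, 64))      # ~28.8 GB/s
# DDR5: two independent 32-bit sub-channels per module, same total width,
# but the two sub-channels can interleave accesses independently
print(2 * peak_gb_per_s(6000, 32))  # ~48.0 GB/s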


5) RAM stability testing

Memtest86 Free
from https://www.memtest86.com/
I use this as a basic stability test on a new system before i update the BIOS to the newest version (which is always one of the first things to do, as the factory BIOS will already be quite outdated). Also, since it runs from a USB stick/drive, i use it as a first check before booting Windows, when something has significantly changed with the RAM or its settings. One or two passes of this give me a good idea if the system is generally stable enough to start installing Windows (or boot it).

It's a good first test if you are completely unsure about stability, as well as a good "finisher" if you want to be extra sure that everything is ok with your memory system after doing other testing. The main advantage is that it runs from USB. The main disadvantage is that RAM tests in Windows are more thorough in catching errors.
Launch the included ImageUSB program to prepare a USB drive with it, then boot from that drive (press F11 during POST for the boot menu).
The row hammer tests at the end, which test for a purely theoretical vulnerability and take a long time, can be skipped.


Once in Windows, a quick way for detecting RAM instabilities is TestMem5 or TM5 for short: https://github.com/CoolCmd/TestMem5
TM5 delivers a good and relatively quick indication of RAM stability. Run as admin. I like to run it with the "1usmus_v3" configuration which can be selected under Settings, because it reliably detects instability for me. A full run takes 90 minutes, but if there's instability, it should detect errors much earlier than that, i found.
This is my go-to RAM test in Windows, because it is pretty reliable at revealing RAM errors when things are not 100% stable yet.

Example of unstable RAM (an error found almost immediately):

Screenshot.png


Any errors are not acceptable. Meaning, something about the RAM configuration has to be changed so that it passes without errors.
The above screenshot is not from me; you can see they used the "Universal 2" configuration, whereas i prefer the "1usmus_v3" one as mentioned.
That one detects errors quicker, and should be selected in the settings here:

TM5 profile.png


Now, armed just with these two tools (Memtest86 for a basic stability test before even installing/booting Windows, and TM5 for more thorough testing in Windows), you should be able to detect most instability just fine. Therefore, the following tools are more for when you are really serious about RAM testing, for example if you manually tune all the timings and just want to test it in every way possible.

I will keep the following overview of other RAM-related stress tests in here, but usually, the two tools from above are enough for most occasions, except for RAM-tweaking enthusiasts.

To more thoroughly test RAM stability, there is a test from Google, and it's called GSAT (Google stressapptest). It has been specifically developed by Google to detect memory errors, because they use ordinary PCs instead of specialized servers for a lot of things. The only downside is, it takes a bit of time to set up. To run GSAT, you first have to enable the "Windows Subsystem for Linux":

0*N8OWBM7IUXaCsH7C.jpg


After the necessary reboot, open the Microsoft Store app and install "Ubuntu", then run Ubuntu from the start menu.
It will ask for a username and password; these are not important, just enter a short password that you remember. You will need to enter it for the update commands.
Then run the following commands one after the other (copy each line, then right-click into the Ubuntu window to paste it, then press enter):

sudo apt-get update
sudo apt full-upgrade -y
sudo apt-get install stressapptest

Then you can start GSAT with the command:
stressapptest -W -M 12000 -s 3600

This example tests 12 GB of RAM (in case of 16 GB total, because you need to leave some for Windows), for 3600 seconds (one hour). You can also enter -s 7200 for two hours.
If you have more RAM, always leave 4 GB for Windows, so with 32 GB, you would use "-M 28000".
GSAT looks unspectacular, just some text scrolling through, but don't let that fool you: that tool is pretty stressful on your RAM (as it should be).
At the end, it has to say Status: PASS, and there should be no so-called "hardware incidents". Otherwise it's not stable.
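If you don't want to do the "-M" math in your head, a throwaway Python snippet does it (using the "leave 4 GB for Windows" rule from above):

total_ram_gb = 32
test_mb = (total_ram_gb - 4) * 1000  # leave ~4 GB for Windows
print(f"stressapptest -W -M {test_mb} -s 3600")  # -> stressapptest -W -M 28000 -s 3600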


Then, HCI Memtest is quite good. There is a useful tool for it, called MemTestHelper: https://github.com/integralfx/MemTestHelper/releases/tag/v2.2.0
It requires Memtest 6.4, which can be downloaded here: https://www.3dfxzone.it/programs/?objid=18508
(Because in the newest Memtest 7.0, they made a change so that MemTestHelper doesn't work anymore, to push people into buying Memtest Pro.)

Put both tools in the same folder. Start MemTestHelper, and with 16 GB RAM, you can test up to 12000 MB (the rest is for Windows).
Let it run until 400% is passed. That's a good indicator that your RAM is stable. If you want to make really sure, let it run to 800%.

memtest_1.png


Another popular tool among serious RAM overclockers is Karhu from https://www.karhusoftware.com/ramtest/
But it costs 10€ to register, so i would just use the other free programs (unless RAM OC is your hobby).


A stability test which also challenges the memory controller a lot, and therefore definitely useful to round out the RAM-related testing:
Linpack Xtreme from https://www.techpowerup.com/download/linpack-xtreme/

Run Linpack, select 2 (Stress test), 5 (10 GB), set at least 10 times/trials, press Y to use all threads, 2x N, and let it do its thing.
It's one of the best tools to detect instability, but warning, this also generates a lot of heat in the CPU. So i would watch the temperatures using HWinfo64 Sensors.
Each trial has to say "pass", and it has to say "checks passed" at the end.

linpack.png


It also puts out a "GFlops" number, that one is actually a decent performance metric to quickly judge if a certain RAM tuning (lowering timings) has performance benefits.



An important note about RAM and heat: Higher ambient temperatures are not good for RAM stability. The RAM might be perfectly stable in a RAM-specific stress test, but depending on the graphics card (its power consumption and cooling design), once it dumps its heat into the case very close to the RAM slots during gaming, there can be RAM-related crashes. Simply because it heats up the RAM a lot and makes it lose stability.

So to be absolutely sure that the RAM is stable even when it's hot, it can be good to run something like FurMark alongside the RAM stability test. Not for hours, because FurMark creates extreme GPU load, but just for 20 minutes or so, to really heat things up. A lot of times, the fins of the cooler are oriented towards the mainboard and the side panel, so the heat comes out from the sides of the card, and the RAM sits right above that.

If your RAM is fine in isolated RAM stress tests, but you have crashes in games (or when otherwise loading the GPU) with the same RAM settings, then you need to loosen up those settings a bit to add more headroom for those circumstances. Go by the three principles of RAM instability: Loosen timings and/or lower frequency and/or raise voltage.



Deep-diving a bit more into RAM:
It can quickly become a bit complicated, but if there are any questions, feel free to ask.


My other guides:
Guide: How to find a good PSU
Guide: How to set up a fan curve in the BIOS


Someone asked me if they can thank me for my work by sending me something via Paypal: Yes, that's possible, just write me a message and i'll tell you my Paypal 😉
 
In theory I understand it now, but the hardest part for me is to transfer it into practice.


If I gather the information from the Gskill datasheet, I see that it is a 16 GB module and has 1 rank per channel. I have two memory channels available on my CPU, knowing that from the AMD documentation.

1732653510106.png


If I look at CPU-Z, it seems it is a 32-bit module... So I have 2 channels x 1R 32-bit running at 2994.1 MHz per channel? In other words, this 16 GB is a single-rank module? And the divider is 1:1 and not running at 6000 MHz because the CPU has two channels, so DDR6000/2 = 3000 MHz, and will be faster than running on 1 channel at 6000 MHz?

And if I would step over to this module...


It is still 32-bit because it is DDR5, but will have two blocks (2R) of 32-bit per module for 1 channel of the CPU. So technically, with the same timings, it can store more data in the same time than the 16 GB module (not looking at the total size of the module) if it is also running at 3000 MHz per channel?
 
In theory I understand it now, but the hardest part for me is to transfer it into practice.

I can understand, because it can be confusing, and CPU-Z is not helping matters.

If I gather the information from the Gskill datasheet, I see that it is a 16 GB module and has 1 rank per channel. I have two memory channels available on my CPU, knowing that from the AMD documentation.

Yes. With DDR5, all 16 GB and 24 GB modules are single-rank (how the memory module is organized internally), and the 32 GB and 48 GB modules are dual-rank.

1732653510106.png


If I look at CPU-Z, it seems it is a 32-bit module... So I have 2 channels x 1R 32-bit running at 2994.1 MHz per channel? In other words, this 16 GB is a single-rank module?

CPU-Z is just confusing people at this point. For a couple of versions, CPU-Z showed "4x 32-bit" for dual-channel DDR5 (since each DDR5 RAM module also has two internal sub-channels, as mentioned), but since many people didn't know that fact, they have gone back to showing "2x 32-bit" to avoid confusion. Which of course adds confusion for those who got used to the "4x 32-bit" from before. And before all that, for a while they even showed "Quad channel" with DDR5, which was totally confusing of course! They want to be too exact for their own good. Nobody would (or should) call it quad-channel on a dual-channel board. I tried to explain it here in more detail before. It's better to just look at the HWinfo Summary window, which puts it in plain text; it should show "Dual-channel".

And the divider is 1:1 and not running at 6000 MHz because the CPU has two channels, so DDR6000/2 = 3000 MHz, and will be faster than running on 1 channel at 6000 MHz?

Another common misconception. The RAM is not running at 6000 MHz, it's running at 6000 MT/s (MegaTransfers/s). The actual frequency is 3000 MHz, then x2 because of Double Data Rate, you end up at DDR5-6000.
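Or as a tiny sketch:

real_clock_mhz = 3000
transfers_per_second = real_clock_mhz * 2  # Double Data Rate: two transfers per clock cycle
print(f"DDR5-{transfers_per_second}")      # DDR5-6000, at an actual clock of 3000 MHz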

It is still 32-bit because it is DDR5, but will have two blocks (2R) of 32-bit per module for 1 channel of the CPU. So technically, with the same timings, it can store more data in the same time than the 16 GB module (not looking at the total size of the module) if it is also running at 3000 MHz per channel?

Dual-rank modules are a bit faster than single-rank modules, but not to the point where you would seek out a kit of dual-rank modules just to get that tiny improvement. As i write in the first post: "Dual-rank is slightly faster performance-wise (up to 4%), but also loads the memory controller more. One dual-rank module puts almost as much stress on the memory system as two single-rank modules!"

Usually, with the single-rank modules (16 and 24 GB ones), you can have a better profile (XMP/EXPO), with higher speeds and tighter timings, and it has a lower chance of causing problems. With the dual-rank modules, you will not as easily find such a high-spec profile, and it also has a higher chance of causing problems, because it stresses the memory system more. So for most people, 2x 16 GB DDR5 with decent speed and decent timings will be the sweet spot. Plus you'd need professional workloads to require more RAM. I know a few games here and there start to have 64 GB in their recommended specs, but it's nonsense for all i know, most games don't even use more than 16 GB. And for going beyond 32 GB total, i have yet to see anything showing an advantage of that for games. Unless you have tons of stuff open in the background which needs a lot of RAM, maybe.
 
Thanks. Sorry about the MT/s, my mistake. And yes, this was not the latest version of CPU-Z. That amount of memory is indeed not needed to play games. I think that amount of RAM and cores is recommended for real-time rendering, depending on the code language etcetera, much less pre-compilation.
 
Dual-rank modules are a bit faster than single-rank modules, but not to the point where you would seek out a kit of dual-rank modules just to get that tiny improvement. As i write in the first post: "Dual-rank is slightly faster performance-wise (up to 4%), but also loads the memory controller more. One dual-rank module puts almost as much stress on the memory system as two single-rank modules!"
I believe you, but the numbers tell something else, I think.

32 GB module:

CAS Latency (CL): 30
RAS-to-CAS Delay (tRCD): 40
RAS Precharge Time (tRP): 40
Row Active Time (tRAS): 96

vs. 16 GB module:

CAS Latency (CL): 30
RAS-to-CAS Delay (tRCD): 38
RAS Precharge Time (tRP): 38
Row Active Time (tRAS): 96

Yes, it will stress the IMC more, but EXPO is there, and maybe it needs to be more fine-tuned depending on the type of the IMC. It's trial and error, and maybe waiting until someone who took the risk with the same IMC has done the stability testing 'to be sure' to get the full speed (2 channels at 3000 MHz) on dual-rank memory with 2 modules - and imagine four modules at DDR5-5600 or DDR5-5200, not sure yet what will be the fastest... Probably DDR5-5200 on an X670E chipset. But they are still too expensive, and I will wait another year for this. Did it once with my Intel, and I will sit this one out.
 
I believe you, but the numbers tell something else, I think.

Not if you look at all the numbers. Let's just look at the two kits you linked before:

kits.png


So for slightly worse timings on the 32 GB modules, they need to go from a pretty high voltage to an even higher one. So this is not directly comparable. If you want to compare like-for-like, you have to compare kits with the same DRAM voltage in their XMP/EXPO profile; then you really know what the step from single-rank to dual-rank means. And by the way, the BIOS should also raise certain IMC-related voltages more with the 2x 32 GB kit.

'to be sure' to get the full speed (2 channels at 3000 MHz) on dual-rank memory with 2 modules - and imagine four modules at DDR5-5600 or DDR5-5200, not sure yet what will be the fastest...

Four modules are always a bad idea, no matter what. Two dual-rank modules purely for the very slight performance advantage, when you don't need the extra capacity - that money is better spent elsewhere. And four single-rank modules i would never recommend, when you could get two dual-rank ones and avoid the penalty of using four modules on a board with a daisy-chain RAM slot layout.
 
Yeah, penalty is penalty, as we say here. Took a look back at my Intel.

1732740012564.png
1732742318403.png


Now I see this was set a long time ago as dual-rank to get 4 RAM modules working stably (done myself in the BIOS, divider 1:2). That's why I now see 4x (2 channels with Gear2) 32-bit... But the DRAM frequency is the same (2x memory controller at 1500 MHz per channel vs. 1x at 3000 MHz per channel), with a manual slight increase of tRAS and tRC, but a lower tRFC and some higher WR timings too. And again, those timings work for me with 4 sticks of 16 GB.

And this is for my AMD, only tuned with the EXPO profile and stability-tested, with 2 sticks:
1732741750058.png
1732742328586.png


Note that the Uncore frequency is more dynamic on my Intel (800-4490 MHz) vs. AMD's fixed 2994 MHz.
 

Some feedback on a new CPU I have now: I replaced the 13600K with a locked 13900K (kept the Noctua D15 air cooler) and am having the best results at 4.8 to 5.3 GHz in the first minutes; after 5 min it runs between 4.6 and 4.8 GHz and still boosts to 5.3 GHz at some moments.

1735622513345.png

1735624375529.png

Tried to set it to Gear1, but couldn't achieve it. I have the same settings for 4 sticks as I had for the 13600K:
4x (2 channels with Gear2) 32-bit. But the DRAM frequency is the same (2x memory controller at 1500 MHz per channel vs. 1x at 3000 MHz per channel). Will it run at the same speed when I add 2 more sticks?
I think it will... knowing the recommended MT/s are higher for the 13900K. Took a look back at the 13600K (see picture below).

1735623497544.jpeg


After all, I gained 10000 points in Cinebench R23 with the 13900K compared with the 13600K.
And what I want to say is... I don't see any difference using 2 sticks or 4 sticks on a 13th gen running at DDR5-6000, contrary to what I hear here and there about Intel CPUs. Yeah, maybe I was lucky that they work perfectly together. Anyway, I have seen many people with memory issues this year; I hope this will help someone. I wish you all a prosperous 2025!
 

For DDR5, Gear2 mode is normal. Looking at your timings in the BIOS, you should tune tREFI (Refresh Interval): try setting it to 32768, so it will do fewer refresh cycles and have more time for actual RAM commands in between. That should boost RAM performance nicely, and it is generally considered the safe way of tuning this (as opposed to just maxing it out at 65536).
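To see what that change means in actual time (a sketch, assuming a 3000 MHz memory clock, i.e. DDR5-6000):

def trefi_microseconds(trefi_clocks, memclk_mhz):
    # tREFI is counted in memory clock cycles; MHz = cycles per microsecond
    return trefi_clocks / memclk_mhz

print(trefi_microseconds(32768, 3000))  # ~10.9 us between refresh commands
print(trefi_microseconds(65536, 3000))  # ~21.8 us -> the riskier maxed-out setting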
 
Hello-

My name is Brian and I am an amateur astrophotographer. In the past, I have usually hired people to build the computers that I use for processing my astrophotography work. However, I decided this time to try and build my own PC. The primary program that I use is PixInsight (PI). PI is a very demanding scientific stacking and processing program that relies heavily on RAM.

The board that I have chosen for my new build is the MAG X870 Tomahawk with an AMD Ryzen 9 9950X, and for the RAM I purchased 2 boxes, each with a pair of 32 GB DDR5 RAM modules at 5600 MHz, for a total of 128 GB.

In preparation for doing the build, I was reading online that setting up a board where the plan is to use all four RAM slots can be tricky, and that I would be better off just using 2 slots, or doing something called a "daisy chain".

My current build has 32 GB of RAM, which is pushing the limits of my system. It generally takes my computer 1:30 minutes to process and stack 5 hours' worth of imaging time.

I was wondering if I could get some advice or feedback for this new build as I am now processing larger data sets and require substantial increase in RAM.

PixInsight System Requirements: https://pixinsight.com/sysreq/

Processor

  • Minimum required processor: Intel Core i5 or equivalent. Current versions of PixInsight require a CPU with AVX2 and FMA3 instruction support on Linux and Windows.
  • Minimum practical processor: An AMD Ryzen or Ryzen Threadripper CPU, or an Intel Core i9/i10/i11 or Xeon, with a minimum of 8 processor cores.
  • Minimum recommended processors: 16-core AMD Ryzen Threadripper 3900 / 5900 / 7900 series, AMD Ryzen 9 5950X, 7950X or 9950X.
By default, the PixInsight platform will use all processors and processor cores available on your machine. There are specific preferences settings to control the maximum number of processors used by PixInsight, along with other parallel execution options such as thread execution priority, thread processor affinity, etc.

See the PixInsight Benchmark community-driven project for up-to-date information on hardware performance with current PixInsight versions.

RAM

  • Minimum required amount of RAM: 16 GiB on a 64-bit machine and operating system (* see note below).
  • Minimum practical amount of RAM: 64 GiB or 128 GiB, depending on the amount of data to be processed, on a 64-bit machine and operating system.
  • Minimum recommended amount of RAM: From 128 GiB to 1 TiB or more, depending on the sizes and number of images to process, on a 64-bit machine and operating system. The recommended minimum for production work is 64 GiB of RAM, but 128 GiB is advisable considering the resolution of modern CMOS cameras and the large amount of frames that must be acquired due to increasing light pollution conditions.
Being a 64-bit application, PixInsight has no practical memory limit. It will use all memory available to applications, and will cause the operating system to allocate virtual memory on disk when necessary and possible.

Note: Although the application can be executed on machines with 16 GiB of RAM, or perhaps even 4-6 GiB, the minimum supported amount of RAM for practical image processing tasks with raw data acquired using current equipment is 32 GiB.
 
From what I can tell, purely on RAM performance, the Intel Ultra series is the fastest. But if size is more important, then max it out? Yes, 4 sticks run slower because of the higher strain on the memory controller. Still, those new Intels are better with memory speeds. Not sure if this is any help. I tried ^^
 
The board that I have chosen for my new build is the MAG X870 Tomahawk with an AMD Ryzen 9 9950X, and for the RAM I purchased 2 boxes, each with a pair of 32 GB DDR5 RAM modules at 5600 MHz, for a total of 128 GB.

In preparation for doing the build, I was reading online that setting up a board where the plan is to use all four RAM slots can be tricky, and that I would be better off just using 2 slots, or doing something called a "daisy chain".

There are two problems: First, DDR5, and especially on the AMD AM5 platform, is quite unhappy when you use four modules, because the RAM slots are designed more with only two modules in mind, due to the daisy-chained RAM slot layout in almost all boards nowadays. So "daisy chain" is not something you do, it's how the RAM slots are connected: one after the other on each channel, instead of going to both slots of each channel separately. The result is, once you populate all four slots, the signal quality suffers. Additionally, the memory controller (a part inside the CPU which communicates with the RAM) now has twice the stress from twice as many modules.

Then there's another problem: just adding two kits of two together. Once you do this, whatever speed/timings are "promised" for the single kit go completely out the window (they are not really promised anyway, just a goal to reach under optimal conditions). To still have them valid (at least from the RAM side) with four modules, you would need to buy a kit of four modules with an EXPO profile for AM5 (Intel has XMP, AM5 has EXPO). You will see that there are not many kits of four modules with an EXPO profile, especially not at higher RAM speeds, because it's not easy to run.

Now, for some items you said "have purchased", so you might as well try how it runs, i guess. But on AM5, the new 800-series boards still have a very immature BIOS, so you will have to do plenty of BIOS updates in the future to improve things, and AM5 is quite unhappy with four modules. Yes, AMD's CPUs have clear advantages in some areas compared to Intel, but if you need a lot of RAM (128 GB or more), i would say Intel actually has the edge there; they can deal with this better. With AMD, 2x 48 GB for 96 GB total is usually the less troublesome configuration. This is the maximum you can get using two modules only.
 

If you need that amount of RAM, you are probably working with uncompressed files. The amount of RAM you need depends more on the input data (capture device), x2 or maybe more if you work in layers, and also on the calculations it does. Some software has a GPU accelerator to take tasks away from the CPU, but you need to enable it if it isn't done automatically. Automatically is always better. I agree with what was said here before, and for sure don't go for a Threadripper if you don't want to be a computer expert; it is just a tool and the results are the same. About the disk space: plan for 1.5 to 3 times the amount of virtual memory alone... So you best get a huge primary drive to take the workload/OS, files, etc. Using two drives works and can speed up the process (do not use a network drive to do the calculation), but that gets less interesting when your PC gets older and you have to relocate the TEMP folder too. But that's Windows... and you'd probably prefer a more lightweight OS like a Linux distro. The fastest option is an M.2 drive directly connected to the CPU; this starts from the AMD X670E and Intel Z790 chipsets... The Tomahawk has one, the Carbon has two (newer models have more). In the end, the more cores and the higher the frequency, the faster it will go. AI is becoming a great tool in recognizing patterns, so you maybe want to consider that too, thinking of the NPU that Intel now provides, but the main task runs on the GPU - not necessarily, but it runs faster there, if it isn't in the cloud already.
 

Now the interesting question: what happens when we populate all four slots on a mainboard with a daisy-chain slot layout? Well, the modules in the second and fourth slots become "daisy-chained" after the modules in the first and third slots. This considerably worsens the electrical properties of the whole memory system.

With four modules, there are now two modules per channel, and the two modules of a channel no longer have the same distance from the memory controller. That's because the PCB traces go to the first slot, and then over to the second slot. This daisy-chaining - with the signal lines going to the first and then to the second module of a memory channel - introduces a lot of unwanted electrical handicaps when using four modules. The signal quality worsens considerably in this case.

Only with a "T-Topology" slot layout, the PCB traces have exactly the same length across all four slots, which would provide much better properties for four-module operation. But mainboards with T-Topology have gone a bit out of fashion, since most people use just two modules. Plus the memory OC numbers look much better with a daisy chain layout and two modules. So if the mainboard makers were to use T-topology on a board, they couldn't advertise with such high overclocking numbers, and people would think the board is worse (and it actually would be, for only two modules).

View attachment 156014
Example of an ASUS board with the rare T-Topology layout, advertising the fact that it works better with four modules compared to the much more common boards using the daisy-chain layout.


2) Single-rank vs. dual-rank

Another consideration is single-rank vs. dual-rank modules. This is about how a RAM module is organized, meaning, how the individual memory chips on the module are addressed. To put it simply: with DDR4, most (if not all) 8 GB modules are single-rank nowadays, as are a number of 16 GB modules. There are also some 16 GB DDR4 modules that are dual-rank, and all bigger modules are always dual-rank. With DDR5, the 16 GB and 24 GB modules are single-rank, and the 32 GB and 48 GB modules are dual-rank. We'll come to the implications of this soon.

The capacity at which modules start to be organized as dual-rank slowly shifts upwards as the technology advances. For example, in the early days of DDR4, there were a bunch of dual-rank 8 GB modules, but in modern RAM kits, those modules will be single-rank. Even the dual-rank 16 GB modules became less common as DDR4 developed further. With DDR5, the 8 GB modules have been 100% single-rank from the start, and the 16 and 24 GB modules are almost certainly single-rank. Above that, it's dual-rank organization. Now, why is this important?

It has implications for the DDR speed that can be reached. The main reason is, a single-rank module puts less stress on the memory system. Dual-rank is slightly faster performance-wise (up to 4%), but also loads the memory controller more. One dual-rank module puts almost as much stress on the memory system as two single-rank modules! This can become an important factor once the DDR speed approaches certain limits.

What is the memory system? It consists of the CPU's integrated memory controller (IMC), the mainboard and its BIOS, and the RAM itself.
So the following factors all affect whether the RAM can actually run at a certain setting:

- The mainboard (chipset, component/PCB quality etc.).
- The mainboard's BIOS memory support and the BIOS settings.
- The CPU's integrated memory controller (IMC), quality depends on the CPU generation as well as on the individual CPU (silicon lottery).
- The properties of the RAM modules.

Every modern mainboard will be happiest with two single-rank modules (for dual-channel operation), because this causes the least stress on the memory system and is electrically the most ideal, considering that the memory slots are connected as a "daisy chain". This fact is reflected in the maximum DDR frequencies that the mainboards are advertised with.

Let's look at DDR4 first. Here is an example from the highest MSI DDR4 board model using the Intel Z690 chipset.
Specifications of MPG Z690 EDGE WIFI DDR4, under "Detail".
1DPC 1R Max speed up to 5333+ MHz
1DPC 2R Max speed up to 4800+ MHz
2DPC 1R Max speed up to 4400+ MHz
2DPC 2R Max speed up to 4000+ MHz

"DPC" means DIMM (=module) per channel, 1R means single-rank, 2R means dual-rank.

With 1DPC 1R = two single-rank modules (so, 2x 8 GB or 2x 16 GB single-rank), the highest frequencies can be reached.
With 1DPC 2R = two dual-rank modules (like 2x 16 GB dual-rank or 2x 32 GB), the maximum attainable frequency is lower, since the memory system is under more stress.
With 2DPC 1R = four single-rank modules (4x 8 GB or 4x 16 GB single-rank), the maximum frequency drops again, because four modules are even more challenging than two dual-rank modules.
And 2DPC 2R = four dual-rank modules (like 4x 16 GB dual-rank or 4x 32 GB) combines the downsides of the highest possible load on the memory controller with the electrical handicap of using four slots on a daisy-chain-mainboard.
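To make the DPC/rank naming concrete, here is a tiny Python sketch. It is illustrative only: the numbers are just MSI's advertised maximums from the list above, and the helper function is hypothetical, not anything official.

# Advertised DDR4 maximums of the MPG Z690 EDGE WIFI DDR4 (from the list above).
# Key: (DIMMs per channel, ranks per module) -> advertised max DDR4 speed.
MAX_SPEED = {
    (1, 1): 5333,  # 1DPC 1R: two single-rank modules
    (1, 2): 4800,  # 1DPC 2R: two dual-rank modules
    (2, 1): 4400,  # 2DPC 1R: four single-rank modules
    (2, 2): 4000,  # 2DPC 2R: four dual-rank modules
}

def advertised_max(total_modules, ranks_per_module):
    dimms_per_channel = total_modules // 2  # dual-channel board, two slots per channel
    return MAX_SPEED[(dimms_per_channel, ranks_per_module)]

print(advertised_max(2, 1))  # 5333 -> e.g. 2x 8 GB single-rank
print(advertised_max(4, 2))  # 4000 -> e.g. 4x 32 GB dual-rank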

That last configuration (2DPC 2R) can already be difficult to get stable at DDR4-3200 sometimes, let alone DDR4-3600. One could consider themselves lucky to get DDR4-3600 working with four dual-rank modules, maybe having to use more relaxed timings for example. The 16 GB and 32 GB modules also often don't have particularly tight XMP timings to begin with, like DDR4-3600 18-22-22-42.
By the way: the second timing (tRCD) is more telling than the first one (tCL) for determining module quality, but most people only look at tCL = CAS Latency.


With the new DDR5 standard, this drop in attainable frequency is even more pronounced. From the initial specs of one of the top MSI Z690 boards:
Specifications of MEG Z690 ACE, under "Detail".
1DPC 1R Max speed up to 6666+ MHz
1DPC 2R Max speed up to 5600+ MHz
2DPC 1R Max speed up to 4000+ MHz
2DPC 2R Max speed up to 4000+ MHz

When going from two modules (1DPC) to four modules (2DPC), the attainable frequency drops drastically. With two single-rank modules (up to 16 GB per module), DDR5-6000 and above is possible according to MSI. With two dual-rank modules (for example 2x 32 GB), that goes down a little already. But with four modules, the memory system is under a lot more stress, and MSI are quite open about the result. This seems to be a limitation of the DDR5 memory system, which relies even more on a very clean signal quality. Using four DDR5 modules on a board with a daisy-chain layout clearly is not good in that regard.
This deterioration with four DDR5 modules is so drastic that the conclusion could be: "DDR5 motherboards should come with only 2 DIMM slots as standard" (YouTube).

Now, with the 13th gen "Raptor Lake" Intel CPUs being available (13600K and up) which come with an improved memory controller, as well as newer BIOS versions containing some memory code optimizations, MSI have revised the frequency numbers for the boards a bit. Again looking at the Z690 ACE, the revised numbers are:
1DPC 1R Max speed up to 6666+ MHz
1DPC 2R Max speed up to 6000+ MHz
2DPC 1R Max speed up to 6000+ MHz
2DPC 2R Max speed up to 5600+ MHz

However, such specs are usually what their in-house RAM overclockers have achieved with hand-picked modules and custom RAM settings. And like many people have shared here on the forum before, it's not like you can drop in some DDR5-7200 or -7600 and expect it to just work, not even with the most high-end Z790 board and 13th gen CPU. Those aren't "plug & play" speeds, those high-end RAM kits are something that enthusiasts buy to have the best potential from the RAM (meaning, a highly binned kit), and then do a back and forth of fine-tuning in the BIOS and stress-testing to get it to where they want it. I have explained this more thoroughly in this post.

And this example is only for Intel DDR5 boards, which had about a one-year head start compared to AM5. What we're seeing on AM5 is: once people try to use four large DDR5 modules, they can consider themselves lucky if they can still get into the DDR5-5xxx range. Sometimes there are even problems getting it to boot properly; sometimes it will be stuck at low speeds and get unstable at anything even close to XMP speeds.

The main takeaway from all this for DDR5:

Whatever total RAM size is needed, it's better to reach it with two modules if decent speed/performance is required. Combining two kits of two high-speed modules each simply has a low likelihood of working. As mentioned, with four modules, especially dual-rank ones like 32 GB modules, the maximum frequency that the memory system can reach drops considerably, which makes XMP/EXPO speeds stop working. There's a reason that there are not many four-module kits available, and they usually have a more conservative speed. With DDR5 it's always better to use only two modules (even with DDR4 that is advised, but four modules can at least work quite decently there).

This also means that DDR4 is actually better for high-capacity memory configurations such as 128 GB total, because:
- It doesn't experience such a large drop in the electrical properties of the memory system when using four modules
- Four-module high-capacity kits are readily available (and at a lower price)
- Four-module kits are actually certified on the memory QVL at MSI
- Four DDR4 modules will most likely outperform their DDR5 equivalent, because DDR5 has to drop to such low frequencies in this configuration that DDR4's lower latencies win out (see the quick calculation below).
The overall higher DDR5 latencies just can't be compensated for by higher RAM frequencies anymore, since using four DDR5 modules requires lower frequencies to be stable.
See also RAM performance scaling.
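To put rough numbers on the latency argument, here is a quick back-of-the-envelope sketch in Python. The kits and timings are typical example values picked for illustration, not measurements:

# First-word latency: one DDR clock cycle lasts 2000/(MT/s) ns (two transfers
# per clock), so the CAS latency in nanoseconds is tCL * 2000 / transfer_rate.
def cas_latency_ns(transfer_rate_mts, tcl):
    return tcl * 2000 / transfer_rate_mts

print(round(cas_latency_ns(3600, 16), 2))  # DDR4-3600 CL16: ~8.89 ns
print(round(cas_latency_ns(5200, 40), 2))  # DDR5-5200 CL40: ~15.38 ns

So a four-module DDR5 setup that is forced down into the DDR5-5xxx range can easily end up with a worse absolute latency than a good DDR4-3600 kit.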

Of course, on AM5 there is no option to go DDR4, it's DDR5 only. And eventually, even Intel will move to DDR5 only. So, either make do with two modules and have the RAM still run at nice speeds, or use four modules in the knowledge that there might be issues and the RAM speed will end up being lower. XMP speed might not be stable, so the "DRAM Frequency" setting might have to be lowered manually from XMP for it to work.

Generally, in case of RAM problems, no matter the technology, there are three possibilities, which can also be used in combination:
- Lower the frequency
- Loosen the timings
- Raise the voltage(s)

But in some cases, buying different RAM might be the best solution.


3) Amount of RAM

For a decent system up to mid-range, 16 GB (as 2x 8 GB) has been the norm for a long time, for good reason. Now, with DDR5, 32 GB (as 2x 16 GB) is slowly becoming the amount that a lot of people go for, at least from nice mid-range systems upwards. While 16 GB is actually still enough even for the most recent games, the system will be a bit more future-proof with 32 GB total. Anything beyond that, however, is useless for gaming; it only tends to make things worse.

Why is that? Games don't really need more than 16 GB. A lot of games are developed with the lucrative console market in mind, and even the PlayStation 5 only has 16 GB of RAM. So games are designed from the ground up not to need more RAM, which then also applies to the PC versions of those games. There are only very few games that can use more than 16 GB of RAM, and it doesn't even make them run much faster. But i don't know a single game that will use more than 32 GB of RAM; they aren't anywhere near that. So even for a high-end gaming system, i would never use more than 32 GB total, when no game can use it anyway (and that's not about to change either). The 2x 8 GB (mostly DDR4) / 2x 16 GB kits always cause the least trouble and run the fastest, which is why one of those is the best choice for a gaming PC.

64 GB RAM or more can be justified for large video editing projects, rendering, heavy photoshop use, running lots of VMs and such cases. However, 64 GB amounts to a waste of money for gaming, no matter what. Before any game will ever touch more than 32 GB, the whole PC will be long outdated, because it will take many years. Right now, most games restrict themselves to 16 GB maximum, because so many potential buyers out there have 16 GB RAM in their system. The next step would be for games to use up to 32 GB, but we're not even there yet. So no system that is put together primarily for gaming should use more than a kit of 2x 16 GB RAM.

We could just be like, ok, the money for that 64 GB RAM (or more) would be wasted because it doesn't have any benefits for gaming, but "more is better", so let the people use more RAM for their nice gaming system. However, when using large 32 GB RAM modules and/or four memory modules, it not only has no benefits, it also has a negative impact on the memory system. The bigger modules usually tend to run slower, and these configurations will also cause more stress for the memory system, increasing the likelihood of problems. So for gaming, i would never choose a configuration which can only cause problems for the memory system, but doesn't provide any benefit from that much RAM being available.


Recommendations for use on modern consumer mainboards:
8 GB RAM: Use 2x 4 GB, or even 1x 8 GB if RAM performance isn't critical anyway - this is ok for entry-level systems, office work etc.
16 GB RAM: Use 2x 8 GB - for up to mid-range (gaming) systems
32 GB RAM: Use 2x 16 GB - for nice mid-range to high-end gaming systems (when all other bottlenecks are removed) and semi-pro uses beyond gaming
48 GB RAM (DDR5 only): Use 2x 24 GB - for nice mid-range to high-end gaming systems (when all other bottlenecks are removed) and semi-pro uses beyond gaming
64 GB RAM: Use 2x 32 GB - purely "beyond gaming" - only necessary for professional use - preferable over any four-module configuration
96 GB RAM (DDR5 only): Use 2x 48 GB - purely "beyond gaming" - only necessary for professional use - preferable over any four-module configuration
128 GB RAM total: Use 4x 32 GB - purely "beyond gaming" - only necessary for professional use
256 GB RAM total: Use 4x 64 GB - purely "beyond gaming" - only necessary for professional use

These last two configurations - using four dual-rank high-capacity modules - are maximally stressing the memory system, so they will probably be restricted to something like DDR4-3200 or lower, or DDR5-5200 or lower respectively. Any higher speeds might not run reliably.

The new DDR5-only option of 2x 24 GB is quite similar to 2x 16 GB, since the 24 GB modules should still be single-rank, basically making them as easy to run as the 16 GB modules. And thus preferable to the 32 GB modules, which are definitely dual-rank and put a higher stress on the memory system.

Also, for 128 GB total, i recommend DDR4, not DDR5. DDR5 really doesn't run well with 4x 32 GB, it would be restricted to quite low frequencies, pretty much negating the DDR5 advantage. With DDR5 RAM, i would actually never recommend using four modules, not even 4x 8 GB (the 8 GB modules are slower and 2x 16 GB work better).

As for the XMP speed: For all the DDR4 configurations up to 64 GB total, i usually recommend DDR4-3600 speed (see chapter 4). For DDR5, the sweet spot would probably be DDR5-6000. Above that, it can gradually become more challenging to stabilize. Around the high DDR5-6xxx range or even into DDR5-7xxx, it's something for enthusiasts who know what they're doing, that's not a "plug & play" speed anymore (especially with AM5), and experience is required to make it work.



3b) How to increase the RAM size when you have 2x 4 GB or 2x 8 GB RAM?

First choice: Replace the 2x 4 GB with 2x 8 GB, or the 2x 8 GB with 2x 16 GB. The new RAM should be a kit of matched modules. This will ensure the best performance and the least problems, because there are only two modules again in the end.

Second choice: Add a kit of two matching modules to your two existing modules. But you might not be able to get the same modules again. Even if they are the same model, something internally might have changed. Or you might toy with the idea of adding completely different modules (for example, adding 2x 8 GB to your existing 2x 4 GB). This can all cause problems. The least problems can be expected when you add two modules that are identical to your old ones. But then there's still this: You are now stressing the memory system more with four modules instead of two, so the attainable RAM frequency might drop a little. Also, it's electrically worse on a mainboard with daisy-chain layout, as explained under 1).

Lastly, adding just one more module (to have three modules total) is by far the worst choice, for several reasons. Every desktop platform has a dual-channel memory setup, meaning it works best with two modules, and it can work decently with four. If you only use the PC for light office work, even a single 4 GB or 8 GB module would do. But in a PC where performance matters, for example for gaming, getting a single RAM module to upgrade your two existing modules is not good at all. The third module will be addressed in single-channel mode, while simultaneously ruining the memory system's electrical properties and forcing everything to run at whatever the slowest module's specification is.

Note: When upgrading the RAM, it's always good to check for BIOS updates, they often improve compatibility with newer RAM modules (even if it's not explicitly mentioned in the changelog).


4) DDR4 only: Today's sweet spot of DDR4-3600 with the latest CPUs

On AMD AM4, DDR4-3600 has been the sweet spot for quite a while. But Intel introduced new memory controllers in their 11th and 12th gen CPUs which also require a divider above a certain RAM frequency. Only up to DDR4-3600 (but that is pretty much guaranteed) do the RAM and the CPU's memory controller (IMC) run at the same frequency (Intel calls this "Gear1 mode"; on AMD AM4 it's "UCLK DIV1 Mode" or "UCLK==MEMCLK"; generally this can be called "1:1 mode"). Somewhere above DDR4-3600, depending on the IMC's capabilities, the IMC has to run on a divider for it all to work (which would be 1:2 mode), making it run at half the RAM frequency. This costs a lot of performance.

An example on Intel Z590 with a kit of DDR4-3200: The IMC doesn't require a divider and can comfortably run in 1:1 mode (Gear1), which has the best performance.

BIOS OC.png


The Gear2 mode that becomes necessary at high RAM frequencies has a substantial performance penalty, because the latencies increase (everything takes a little longer). This basically leads to the same situation that we already know from AMD AM4: RAM frequencies that are considerably above DDR4-3600 are almost useless, because of the divider being introduced for the IMC (memory controller). The performance loss with a divider is just too significant.

For the RAM performance to be on the same level again as DDR4-3600 without a divider (1:1 mode), it requires something like DDR4-4400 (!) with the divider in place (1:2 mode).

Looking at the high prices for DDR4-4400 kits, or what it takes to overclock a normal kit of RAM to that, it's not practical. So with Intel 11th- to 14th-gen CPUs on DDR4 boards, and of course AMD AM4 CPUs, the "sweet spot" is usually at DDR4-3600. This frequency is known to not require a divider for the memory controller and thus gives the best performance and bang-for-buck.

Some of the more recent CPU models can sometimes go a bit above DDR4-3600 without requiring a divider for the memory controller. But DDR4-3600 almost always runs well in 1:1 mode and has a better price/performance than RAM with higher specs, so it's still the top pick.
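To make the divider arithmetic concrete, here is a small sketch (frequencies only, ignoring timings; the function is just for illustration):

# In 1:1 mode (Gear1), the memory controller runs at the RAM's real clock,
# which is half the transfer rate; in 1:2 mode (Gear2), it runs at half of that.
def imc_clock_mhz(transfer_rate_mts, divider):
    return transfer_rate_mts / 2 / divider

print(imc_clock_mhz(3600, 1))  # 1800.0 MHz -> DDR4-3600 in Gear1 (1:1)
print(imc_clock_mhz(4400, 2))  # 1100.0 MHz -> DDR4-4400 in Gear2 (1:2)

The memory controller dropping from 1800 MHz to 1100 MHz is why DDR4-4400 in Gear2 needs such a large frequency advantage just to catch up to DDR4-3600 in Gear1.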

Here's an example of an AMD system (X570 with Ryzen 3900X). The tool HWinfo64 can show those frequencies in the "Sensors" window.
DDR4-3866 is too much to run in 1:1 mode, so the divider for the memory controller is active and performance is worse.
DDR4-3600 manages to run in 1:1 mode and the performance is better.

View attachment 150421

The best thing on both platforms nowadays is to run DDR4-3600 without a divider and with some nice low timings if possible. Something like DDR4-4000 will usually make the BIOS enable the divider for the memory controller and it will be slower overall than DDR4-3600, despite the higher RAM frequency. This is because the latencies are effectively increased when the memory controller has to work at a lower frequency. With a DDR4-4000 kit of RAM for example, i would enable XMP, but then manually set a DRAM frequency of DDR4-3600. This should make the BIOS remove the divider for the memory controller and the performance will immediately be better.

Here's a page from an MSI presentation about 11th gen Rocket Lake CPUs, showing the increased latencies when the divider comes into play:
View attachment 158526

And here's from an AMD presentation about the Ryzen 3000-series, showing similarly increased latencies once the divider is active:
View attachment 159007

With the higher DDR5 speeds, a divider is practically always used, because it's not feasible to run the memory controller at the same speed anymore. But with DDR5, the divider for the memory controller has less of a penalty than with DDR4, because DDR5 can access a module via two separate sub-channels of 2x 32 bits (instead of one 64-bit channel like on DDR4). This allows for higher/better interleaving of memory accesses on DDR5 and alleviates most of the latency penalties. On AMD, the FCLK can be left at 2000 MHz with DDR5; it seems to be the new "sweet spot".


5) RAM stability testing

Memtest86 Free
from https://www.memtest86.com/
I use this as a basic stability test on a new system before i update the BIOS to the newest version (which is always one of the first things to do, as the factory BIOS will already be quite outdated). Also, since it runs from a USB stick/drive, i use it as a first check before booting Windows, when something has significantly changed with the RAM or its settings. One or two passes of this give me a good idea if the system is generally stable enough to start installing Windows (or boot it).

It's a good first test if you are completely unsure about stability, as well as a good "finisher" if you want to be extra sure that everything is ok with your memory system after doing other testing. The main advantage is that it runs from USB. The main disadvantage is that RAM tests in Windows are more thorough in catching errors.
Launch the included ImageUSB program to prepare a USB drive with it, then boot from that drive (press F11 during POST for the boot menu).
The row hammer tests at the end, which test for a purely theoretical vulnerability and take a long time, can be skipped.


Once in Windows, a quick way for detecting RAM instabilities is TestMem5 or TM5 for short: https://github.com/CoolCmd/TestMem5
TM5 delivers a good and relatively quick indication of RAM stability. Run as admin. I like to run it with the "1usmus_v3" configuration which can be selected under Settings, because it reliably detects instability for me. A full run takes 90 minutes, but if there's instability, it should detect errors much earlier than that, i found.
This is my go-to RAM test in Windows, because it is pretty reliable at revealing RAM errors when things are not 100% stable yet.

Example of unstable RAM (an error found almost immediately):

View attachment 194727

Any errors are not acceptable. Meaning, something about the RAM configuration has to be changed, so it passes without errors.
The above screenshot is not from me; you can see they used the "Universal 2" configuration, whereas i prefer the "1usmus_v3" one as mentioned.
That will detect errors quicker, and should be selected in the settings here:

View attachment 198642

Now, armed just with these two tools (Memtest86 for a basic stability test before even installing/booting Windows, and TM5 for more thorough testing in Windows), you should be able to detect most instability just fine. Therefore, the following tools are more for when you are really serious about RAM testing, for example if you manually tune all the timings and just want to test it in every way possible.

I will keep the following overview of other RAM-related stress tests in here, but usually, with the two tools from above, it's enough for most occasions, except for RAM-tweaking enthusiasts.

To more thoroughly test RAM stability, there is a test from Google, and it's called GSAT (Google stressapptest). It has been specifically developed by Google to detect memory errors, because they use ordinary PCs instead of specialized servers for a lot of things. The only downside is, it takes a bit of time to set up. To run GSAT, you first have to enable the "Windows Subsystem for Linux":

[Screenshot: enabling the "Windows Subsystem for Linux" feature in Windows]


After the necessary reboot, open the Microsoft Store app and install "Ubuntu", then run Ubuntu from the start menu.
It will ask for a username and password; they are not important, just enter a short password that you can remember, as you'll need it for the update commands.
Then run the following commands one after the other (copy each line, then right-click into the Ubuntu window to paste it, then press enter):

sudo apt-get update
sudo apt full-upgrade -y
sudo apt-get install stressapptest

Then you can start GSAT with the command:
stressapptest -W -M 12000 -s 3600

This example tests 12 GB of RAM (in case of 16 GB total, because you need to leave some for Windows), for 3600 seconds (one hour). You can also enter -s 7200 for two hours.
If you have more RAM, always leave 4 GB for Windows, so with 32 GB, you would use "-M 28000".
GSAT looks unspectacular, just some text scrolling through, but don't let that fool you, that tool is pretty stressful on your RAM (as it should be).
At the end, it has to say Status: PASS, and there should be no so-called "hardware incidents". Otherwise it's not stable.
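If you don't want to do the "-M" math each time, here is a trivial helper script (just a sketch that follows the rule of thumb above, leaving 4 GB for Windows):

# Build the stressapptest command line from the total RAM size.
def gsat_command(total_ram_gb, seconds=3600):
    test_mb = (total_ram_gb - 4) * 1000  # leave 4 GB for Windows
    return "stressapptest -W -M %d -s %d" % (test_mb, seconds)

print(gsat_command(16))        # stressapptest -W -M 12000 -s 3600
print(gsat_command(32, 7200))  # stressapptest -W -M 28000 -s 7200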


Then, HCI Memtest is quite good. There is a useful tool for it, called MemTestHelper: https://github.com/integralfx/MemTestHelper/releases/tag/v2.2.0
It requires Memtest 6.4, which can be downloaded here: https://www.3dfxzone.it/programs/?objid=18508
(Because in the newest Memtest 7.0, they made a change so that MemTestHelper doesn't work anymore, and you would be forced to buy Memtest Pro.)

Put both tools in the same folder. Start MemTestHelper, and with 16 GB RAM, you can test up to 12000 MB (the rest is for Windows).
Let it run until 400% are passed. That's a good indicator that your RAM is stable. If you want to make really sure, let it run to 800%.

memtest_1.png


Another popular tool among serious RAM overclockers is Karhu from https://www.karhusoftware.com/ramtest/
But it costs 10€ to register, so i would just use the other free programs (unless RAM OC is your hobby).


A stability test which also challenges the memory controller a lot, and therefore definitely useful to round out the RAM-related testing:
Linpack Xtreme from https://www.techpowerup.com/download/linpack-xtreme/

Run Linpack, select 2 (Stress test), 5 (10 GB), set at least 10 times/trials, press Y to use all threads, 2x N, and let it do its thing.
It's one of the best tools to detect instability, but warning, this also generates a lot of heat in the CPU. So i would watch the temperatures using HWinfo64 Sensors.
Each trial has to say "pass", and it has to say "checks passed" at the end.

View attachment 161818

It also puts out a "GFlops" number, which is actually a decent performance metric to quickly judge whether a certain RAM tuning (like lowering timings) has performance benefits.



An important note about RAM and heat: higher ambient temperatures are not good for RAM stability. The RAM might be perfectly stable in a RAM-specific stress test, but depending on the graphics card (its power consumption and cooling design), once it dumps its heat into the case very close to the RAM slots during gaming, there can be RAM-related crashes. Simply because it heats up the RAM a lot and makes it lose stability.

So to be absolutely sure that the RAM is stable even when it's hot, it can be good to run something like FurMark alongside the RAM stability test. Not for hours, because FurMark creates extreme GPU load, but just for 20 minutes or so, to really heat things up. A lot of times, the fins of the cooler are oriented towards the mainboard and the side panel, so the heat comes out from the sides of the card, and the RAM sits right above that.

If your RAM is fine in isolated RAM stress tests, but you have crashes in games (or when otherwise loading the GPU) with the same RAM settings, then you need to loosen up those settings a bit to add more headroom for those circumstances. Go by the three principles of RAM instability: Loosen timings and/or lower frequency and/or raise voltage.



Deep-diving a bit more into RAM:
It can quickly become a bit complicated, but if there are any questions, feel free to ask.


My other guides:
Guide: How to find a good PSU
Guide: How to set up a fan curve in the BIOS


Someone asked me if they can thank me for my work by sending me something via Paypal: Yes, that's possible, just write me a message and i'll tell you my Paypal 😉

Your article is incredible.
It was like having a teacher in class!

What I picked up is that DDR5 is almost useless. Unfortunately I got 32GB (2x 16GB), and I would say that after pushing the memory to 6400 MHz, I didn't gain anything, just crashes.
If I had seen this post before, I would not have spent 30-40 EUR more on a useless DDR5 setup.

- DDR4 is easier and cheaper to find
- Better stability
- A wider choice of processors
- Cheaper motherboard options
- Runs almost the same as DDR5

So now I have to endure this S***t (you know), the DDR5, until my next PC!
 
Well, i wouldn't go so far as to say DDR5 is useless. The evolution of DDR technology is inevitable, and it did bring certain improvements. If i were to build a brand new system right now, and i didn't already have the kit of high-end 2x 16 GB DDR4 that i have, i would go with DDR5 as well. DDR4 can be a good choice when you 1) have a kit of nice DDR4 already that you want to re-use, or 2) want to use 4x 32 GB RAM, or 3) the price difference (regarding board + RAM) is clearly in favour of DDR4. The latter is not that common anymore nowadays, compared to the early days of DDR5 when it was much more expensive and DDR4 was the clear price/performance winner.

For DDR5, if you want to stay away from trouble while still getting nice performance, i would go for DDR5-6000. Even with a DDR5-6400 kit, you can enable XMP/EXPO, but then additionally set "DRAM Frequency" to DDR5-6000 by hand, if -6400 isn't stable. You don't lose too much performance from this.

DDR4 isn't more stable per se, you can get full stability with DDR5 too. DDR5 is, however, more negatively affected when using four modules at once. Often times, that will force a very low DDR speed (especially with four big dual-rank modules like 4x 32 GB), and it can actually make it perform worse than a similar DDR4 setup. And of course, once you go too much above DDR5-6000, you can start seeing problems with stability even with two modules, especially on AMD AM5.

The thing about DDR4, it was fully developed to the maximum of the technology, both on the RAM side and on the IMC side. With DDR5, we're not there yet on either side. But that doesn't make it the worse choice automatically, if you know what to look out for.
 
I won't write off AM5. I see an improvement from the 5000 to the 9000 series, and AM5 will continue... Going from DDR5-4800 to 6000 doesn't improve the CPU response that much, so why go for 1000 or 2000 more? DDR4 officially maxes out at 3200, so DDR5 is a significant increase either way. Choosing an X3D chip reduces latency, so that could end up ahead of regular DDR5-6000. The same could be said of Intel, but in another way.
 
I really wish I had found this article before I ordered and paid for the memory for this board! But here is where I am now, in case anyone can give me a suggestion.
I purchased a new computer with the MSI MAG X870 TOMAHAWK WIFI AM5 ATX with 48GB (24GBx2) CORSAIR DOMINATOR TITANIUM in slots DIMMA2 and DIMMB2. I later purchased another set of this memory directly from Corsair. When I installed the second set of DDR5, the computer would not boot to BIOS. I then swapped the locations of the DDR5 kits (moved the new memory to slots DIMMA2 and DIMMB2 and the original memory to slots DIMMA1 and DIMMB1), and then I could boot to BIOS. The memory passed the Windows Memory Diagnostic Tool OK. With the DRAM speed on "auto", the BIOS reported an Adjusted DRAM Speed of 3600 MT/s. If I try to set this higher, since the memory is advertised as 7200 MT/s, the computer will not boot to BIOS. Is the memory defective, or can you recommend a setting that restores some of the speed of this memory in my computer?
 
The RAM itself is perfectly fine. But on AM5, there is almost zero chance to run that four-module configuration close to DDR5-7200. Even on Intel, you might face serious problems here, but on AM5, it's even harder to achieve. If you want to run this kind of speed on AM5, you should only use two modules.

We have several "mistakes" coming together here (that's not to put much blame on you at all, it's easy not to have known about this, and it is rather poorly communicated to the general public, otherwise i wouldn't need to write this thread):

1) Using an enthusiast-grade XMP-only RAM kit - that is meant for Intel platforms - on an AM5 platform:

Screenshot 2025-03-05 at 12-53-28 DOMINATOR TITANIUM RGB DDR5-Speicher CORSAIR.png


For AM5, it's usually better to get a kit with an EXPO profile (AMD's AM5 alternative profile to Intel's XMP). And right off the bat, you'll see that those stop at DDR5-6400, or for the 2x 24 GB kits, at DDR5-6000. Why? Because on AM5, it's more difficult to run higher DDR speeds, Intel CPUs' memory controllers are better.

2) Combining two such kits and expecting it to work anywhere near its XMP speed. As explained in this thread, several things about the memory system worsen considerably once you use four modules: Not only is it electrically poorer, due to the way the slots are daisy-chained after each other, but it also doubles the stress on the memory controller, so now everything has to run at a slower pace for it to be stable.

3) "More is better" is not adequate for RAM. Instead, you have to determine how much RAM you really need, the process of which i have described in this post. 2x 24 GB is more than plenty for everything a normal user could encounter in daily use, including all gaming. Games are far less RAM-demanding than people think. With 48 GB total, you'd be good for a decade or so, in all probability.

If you have professional workloads which require up to 96 GB of RAM, then it's far better to swap the 2x 24 GB kit for a 2x 48 GB EXPO kit. That should run at its EXPO speed no problem, even though that speed will not be DDR5-7200. But the performance increase from DDR5-6000 to DDR5-7200 is much less than the difference in numbers might suggest. Already at DDR5-6000, you get the vast majority of performance from DDR5.

In summary, i would abandon any efforts to get this close to DDR5-7200. If you really need this much RAM, i'd return/sell the old kits and get one kit of 2x 48 GB; it will run much better and be easier on the memory system.
 
Hi all

Wow, amazing and very informative post, thanks a lot!

I'd explain my case though. I bought a MSI MAG X670E Tomahawk WiFi paired with a Ryzen 9 7950X, in order to replace the PC that I'm using in my homelab as a virtual machine host running Ubuntu 22.04. So, I checked the MB specs and it said "4x DDR5, Maximum Memory Capacity 256GB" and "2DPC 2R Max speed up to 5400+ MHz". The CPU specs allow for 128GB, so there I go.

I also went to the memory compatibility chart and found that the Corsair 32GB 4800/5600MHz CMK64GX5M2B5600Z40 was supported, so I bought 4 sticks (2 kits of 2), totaling 128GB, which was perfect for a 32-thread processor. I honestly didn't notice or understand the last column there, which implicitly (but not explicitly) seems to say that only a 1- or 2-stick configuration is supported.

I was planning to run the memory modules at 4800MHz, as I need stability before speed. The Debian 12 system was randomly hanging: I fully passed the memtest86 test, yet it was still hanging with no load at all after a random amount of time, sometimes 6-8 hrs, sometimes the next day. And the system logs showed some errors about accessing invalid data areas.

Out of desperation, I removed two sticks, and later upgraded to Debian 13, and it seems to be running stably now. But I don't really know if it fixed itself because of the OS upgrade or because of using only 2 sticks.

I have configured a bootable disk with Windows, so I can use all the stress and testing tools listed here.

Would you trust those extensive tests to bring back the 4 DIMMs? What would your strategy be here?

Also, I don't know why MSI is advertising a max capacity of 256GB if there is no configuration anywhere in their memory support list that even reaches 128GB.

Thanks!
 
About the QVL, a few things have to be taken into consideration. First, if they are testing a kit of two modules such as the CMK64GX5M2B5600Z40, they don't tend to combine two such kits; they test the one kit and that's it. After all, they don't really want the user to combine two kits either, because they know how badly this can go (especially on AM5), and the XMP/EXPO is out the window anyway if you combine two kits. This doesn't necessarily mean that combining two such kits is completely out of the question, it's just that they don't particularly want to test or encourage it. Plus, the results might say more about their individual CPU's IMC than about what you can generally expect with any CPU (as with all their QVL results). So the "checkmark" missing for 4 DIMMs does not mean they tested it and it didn't work; in most cases it means they didn't even test it.

So where they will test with four modules is obviously with four-module kits, or when they're using four single modules (not from a kit). Of course, there are at least five times as many XMP kits overall with 4x 16 GB as there are with 4x 32 GB (even though 4x 16 GB is usually worse than 2x 32 GB). And if we're looking at 4x 32 GB EXPO kits meant specifically for AM5, then there is... nothing, apart from registered modules (with ECC etc.)!

So the RAM makers are obviously also acutely aware of how difficult it is to run 4x 32 GB on AM5. Instead, what you get there are a couple of XMP-only kits from Kingston and Corsair, stopping at DDR5-5600, which they advertise for Intel platforms.

The jump from two to four such modules is vast. As mentioned in the first post, it makes things electrically worse, and it doubles the stress on the CPU's memory controller, which on AM5 can really struggle with this configuration. Also, the Ryzen 7000-series will probably do worse here than the 9000-series where they could tweak the IMC a bit more by now.

The 256 GB capacity is uncharted territory. They have it running in their lab, no doubt, but with a hand-picked 9000-series CPU with great IMC quality. You'll probably find screenshots from MSI where they ran this, but it won't be easy to achieve something similar at home, where you can't cherry-pick your CPU for IMC quality. So, just like that, you now have to make the best out of your configuration as well.

Coming to the testing: Memtest86, as mentioned, gives you a good idea about basic stability and can pick up defective modules, but it's not that in-depth when it comes to stability in the "grey zone". For that, i trust TestMem5 more, with the 1usmus_v3 profile. I think by default it will run for 90 minutes, and once your configuration survives that (perhaps with a little tweaking of the DDR speed), you can be fairly certain that it's decently stable. And since you're running Linux, why not give GSAT a try.
 