RAM explained: Why two modules are better than four / single vs. dual-rank / stability testing

citay

Since some people run into problems with four RAM modules on modern MSI mainboards, i wanted to explain the reasons behind that, and why two modules are often superior. The main reason lies in the way the memory slots are connected to the memory controller, which is inside the CPU. So the first explanation is about:


1) RAM slot layout

All regular mainboards and desktop CPU models have a dual-channel memory system. Since a lot of boards offer four RAM slots, each pair of slots has to share one RAM channel. So the four RAM slots are not addressed individually, but in pairs, as two channels. The two ways to connect the RAM slot pairs on the board are "Daisy chain" and "T-Topology". This RAM slot layout decision - the way the slots are connected - has a big influence on how many modules (two or four) the board works best with.

Here is a slide from an MSI presentation, showing that almost all of today's boards have a "daisy chain" memory slot layout. This layout heavily prefers two-module operation. The presentation is a bit older, but it's safe to say that the vast majority of recent mainboards (for AMD and Intel) also have a daisy chain layout, and it's confirmed in several reviews. MSI especially are known to use this layout on almost all their modern boards. For other mainboard makers, it depends on the board model, but they also tend to prefer this layout.

Daisy Chain.jpg


Daisy chain means that the slot pairs are connected one after the other, and therefore optimized for two modules total. The right slot of each channel is the end point.
When using two RAM modules, they go into slots 2 and 4 counted from the left, as per the mainboard manual. Meaning, into the second slot of each channel, which is the end point. The reason is that this puts them at the very end of the PCB traces coming from the CPU, which is important for the electrical properties.
PCB (printed circuit board) traces are the thin signal lines that are visible on the mainboard, especially between the CPU and the RAM slots.

memory-layout.gif


Why is this important? The PCB traces, going from the memory controller contacts of the CPU, to each contact of the RAM slots, are optimized to result in exactly the same distance between all those points. They are essentially "zig-zagging" across the board for an electrically ideal layout, making a few extra turns if a direct line would lead to an uneven distance.

This is done so that, with two modules, a) each RAM module is at the very end of the electrical traces coming from the CPU's memory controller, and b) each module has exactly the same distance to the memory controller across all contacts. We are dealing with nanosecond-exact timings, so all this matters.

On a mainboard with a daisy-chain RAM slot layout, this optimization is done with only two modules in mind, which go into slots 2 and 4 (on the board, those slots are called A2 and B2). This is the configuration that most buyers use, and it also results in the best overclocking potential. This way, the mainboard makers can advertise higher RAM overclocking frequencies for the board, and the majority of buyers will have the ideal solution with two RAM modules.

Note: Never populate slots 1 and 3 first. When putting the modules into slot 1 and 3, the empty slots 2 and 4 would be similar to having some loose wires hanging from the end of each RAM contact, creating unwanted signal reflections and so on. So with two modules, they always need to go into the second slot of each memory channel (slot 2+4 aka A2 and B2), to not have "loose ends" after each RAM module.

Slots.png


Now the interesting question: what happens when we populate all four slots on a mainboard with a daisy-chain slot layout? Well, the modules in the second and fourth slots become "daisy-chained" after the modules in the first and third slots. This considerably worsens the electrical properties of the whole memory system.

With four modules, there are now two modules per channel, and the two modules of a channel no longer have the same distance from the memory controller. That's because the PCB traces go to the first slot, and then over to the second slot. This daisy-chaining - with the signal lines going first to one and then to the other module of a memory channel - introduces a lot of unwanted electrical handicaps when using four modules. The signal quality worsens considerably in this case.

Only with a "T-Topology" slot layout, the PCB traces have exactly the same length across all four slots, which would provide much better properties for four-module operation. But mainboards with T-Topology have gone a bit out of fashion, since most people use just two modules. Plus the memory OC numbers look much better with a daisy chain layout and two modules. So if the mainboard makers were to use T-topology on a board, they couldn't advertise with such high overclocking numbers, and people would think the board is worse (and it actually would be, for only two modules).

topology2.jpg
Example of an ASUS board with the rare T-Topology layout, advertising the fact that it works better with four modules compared to the much more common boards using the daisy-chain layout.


2) Single-rank vs. dual-rank

Another consideration is single-rank vs. dual-rank modules. This is about how a RAM module is organized, meaning, how the individual memory chips on the module are addressed. To put it simply: with DDR4, most (if not all) 8 GB modules are single-rank nowadays, as well as a bunch of 16 GB modules. There are also some 16 GB DDR4 modules that are dual-rank, and all bigger modules are always dual-rank. With DDR5, the 16 GB and 24 GB modules are single-rank, and the 32 GB and 48 GB modules are dual-rank. We'll come to the implications of this soon.

The capacity at which the modules start to be organized as dual-rank slowly shifts upwards as the technology advances. For example, in the early days of DDR4, there were a bunch of dual-rank 8 GB modules, but with the modern RAM kits, those modules will be single-rank by now. Even the dual-rank 16 GB modules became less prominent with DDR4 as it developed further. With DDR5, the 8 GB modules are 100% single-rank from the start, the 16 and 24 GB modules are almost certainly single-rank. Above that, it's dual-rank organization. Now, why is this important?

It has implications for the DDR speed that can be reached. The main reason is, a single-rank module puts less stress on the memory system. Dual-rank is slightly faster performance-wise (up to 4%), but also loads the memory controller more. One dual-rank module puts almost as much stress on the memory system as two single-rank modules! This can become an important factor once the DDR speed approaches certain limits.
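
To sum up that rule of thumb in code form, here is a minimal sketch (my own illustration based on the typical retail modules described above, not an official spec - individual kits can deviate, so check the manufacturer's spec sheet or a tool like CPU-Z under "SPD" if in doubt):

def likely_rank(ddr_gen: int, capacity_gb: int) -> str:
    # Rough rule of thumb for how a retail module of a given capacity is organized.
    if ddr_gen == 4:
        # Modern DDR4: 8 GB modules are single-rank, 32 GB are dual-rank,
        # 16 GB modules exist in both variants.
        if capacity_gb <= 8:
            return "single-rank"
        if capacity_gb == 16:
            return "single- or dual-rank (check the spec sheet)"
        return "dual-rank"
    if ddr_gen == 5:
        # DDR5: 8/16/24 GB modules are single-rank, 32/48 GB are dual-rank.
        return "single-rank" if capacity_gb <= 24 else "dual-rank"
    raise ValueError("only DDR4 and DDR5 are covered here")

print(likely_rank(5, 24))  # -> single-rank
print(likely_rank(5, 32))  # -> dual-rank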

What is the memory system? It consists of the CPU's integrated memory controller (IMC), the mainboard and its BIOS, and the RAM itself.
So the following factors all affect if the RAM can actually run at a certain setting:

- The mainboard (chipset, component/PCB quality etc.).
- The mainboard's BIOS memory support and the BIOS settings.
- The CPU's integrated memory controller (IMC), quality depends on the CPU generation as well as on the individual CPU (silicon lottery).
- The properties of the RAM modules.

Every modern mainboard will be happiest with two single-rank modules (for dual-channel operation), because this causes the least stress on the memory system, and is electrically the most ideal, considering that the memory slots are connected as "daisy chain". This fact is reflected in the maximum DDR frequencies that the mainboards are advertised with.

Let's look at DDR4 first. Here is an example from the highest MSI DDR4 board model using Intel Z690 chipset.
Specifications of MPG Z690 EDGE WIFI DDR4, under "Detail".
1DPC 1R Max speed up to 5333+ MHz
1DPC 2R Max speed up to 4800+ MHz
2DPC 1R Max speed up to 4400+ MHz
2DPC 2R Max speed up to 4000+ MHz

"DPC" means DIMM (=module) per channel, 1R means single-rank, 2R means dual-rank.

With 1DPC 1R = two single-rank modules (so, 2x 8 GB or 2x 16 GB single-rank), the highest frequencies can be reached.
With 1DPC 2R = two dual-rank modules (like 2x 16 GB dual-rank or 2x 32 GB), the maximum attainable frequency is lower, since the memory system is under more stress.
With 2DPC 1R = four single-rank modules (4x 8 GB or 4x 16 GB single-rank), the maximum frequency drops again, because four modules are even more challenging than two dual-rank modules.
And 2DPC 2R = four dual-rank modules (like 4x 16 GB dual-rank or 4x 32 GB) combines the downsides of the highest possible load on the memory controller with the electrical handicap of using four slots on a daisy-chain-mainboard.

The last configuration can already be difficult to get stable at DDR4-3200 sometimes, let alone DDR4-3600. One could consider themselves lucky to get DDR4-3600 working with four dual-rank modules, maybe having to use more relaxed timings for example. The 16 GB and 32 GB modules also often don't have particularly tight XMP timings to begin with, like DDR4-3600 18-22-22-42.
By the way: The second timing (tRCD) is more telling and important than the first one (tCL) to determine the module quality, but most people only look at tCL = CAS Latency.


With the new DDR5 standard, this drop in attainable frequency is even more pronounced. From the initial specs of one of the top MSI Z690 boards:
Specifications of MEG Z690 ACE, under "Detail".
1DPC 1R Max speed up to 6666+ MHz
1DPC 2R Max speed up to 5600+ MHz
2DPC 1R Max speed up to 4000+ MHz
2DPC 2R Max speed up to 4000+ MHz

When going from two modules (1DPC) to four modules (2DPC), the attainable frequency drops drastically. With two single-rank modules (up to 16 GB per module), DDR5-6000 and above is possible according to MSI. With two dual-rank modules (for example 2x 32 GB), that goes down a little already. But with four modules, the memory system is under a lot more stress, and MSI are quite open about the result. This seems to be a limitation of the DDR5 memory system, which relies even more on a very clean signal quality. Using four DDR5 modules on a board with a daisy-chain layout clearly is not good in that regard.
This deterioration with four DDR5 modules is so drastic that the conclusion could be: DDR5 motherboards should come with only 2 dimm slots as standard (Youtube)

Now, with the 13th gen "Raptor Lake" Intel CPUs being available (13600K and up) which come with an improved memory controller, as well as newer BIOS versions containing some memory code optimizations, MSI have revised the frequency numbers for the boards a bit. Again looking at the Z690 ACE, the revised numbers are:
1DPC 1R Max speed up to 6666+ MHz
1DPC 2R Max speed up to 6000+ MHz
2DPC 1R Max speed up to 6000+ MHz
2DPC 2R Max speed up to 5600+ MHz

However, such specs are usually what their in-house RAM overclockers have achieved with hand-picked modules and custom RAM settings. And like many people have shared here on the forum before, it's not like you can drop in some DDR5-7200 or -7600 and expect it to just work, not even with the most high-end Z790 board and 13th gen CPU. Those aren't "plug & play" speeds, those high-end RAM kits are something that enthusiasts buy to have the best potential from the RAM (meaning, a highly binned kit), and then do a back and forth of fine-tuning in the BIOS and stress-testing to get it to where they want it. I have explained this more thoroughly in this post.

And this example is only for Intel DDR5 boards. They had about a one year head start compared to AM5. What we're seeing on AM5 is, once people try to use four large DDR5 modules, they can consider themselves lucky if they can still get into the DDR5-5xxx range. Sometimes there are even problems getting it to boot properly, sometimes it will be stuck at low speeds and become unstable at anything even close to XMP speeds.

The main takeaway from all this for DDR5:

Whatever total RAM size needed, it's better to reach it with two modules if decent speed/performance is required. Combining two kits of two high-speed modules each simply has a low likelihood of working. As mentioned, with four modules, especially dual-rank ones like 32 GB modules, the maximum frequency that the memory system can reach drops down considerably, which makes XMP/EXPO speeds not work anymore. There's a reason that there are not that many four-module kits available, and they are usually a more conservative speed. With DDR5 it's always better to use two modules only (even with DDR4 that is advised, but four modules can at least work quite decently there).

This also means that DDR4 is actually better for high-capacity memory configurations such as 128 GB total, because:
- It doesn't experience such a large drop in the electrical properties of the memory system when using four modules
- Four-module high-capacity kits are readily available (and at a lower price)
- Four-module kits are actually certified on the memory QVL at MSI
- They will most likely outperform their DDR5 equivalent, because DDR4's lower latencies beat the low frequencies that a four-module DDR5 configuration would be limited to (see the small worked example below this list).
The overall higher DDR5 latencies just can't be compensated for by higher RAM frequencies anymore, since using four DDR5 modules requires lower frequencies to be stable.
See also RAM performance scaling.
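
To illustrate the latency argument with some simple numbers: the first word of a memory access arrives after roughly tCL cycles of the real memory clock, and the real clock is half the DDR transfer rate. Here is a minimal sketch with kits i picked purely as examples - real-world performance depends on far more than this one number (tRCD, tRFC, Gear mode, sub-channels and so on):

def first_word_latency_ns(transfer_rate_mts: int, tcl: int) -> float:
    clock_mhz = transfer_rate_mts / 2   # real memory clock is half the transfer rate
    return tcl / clock_mhz * 1000       # cycles / MHz -> nanoseconds

print(first_word_latency_ns(3600, 16))  # DDR4-3600 CL16 -> ~8.9 ns
print(first_word_latency_ns(3600, 18))  # DDR4-3600 CL18 -> 10.0 ns
print(first_word_latency_ns(4800, 40))  # DDR5-4800 CL40 -> ~16.7 ns

So a 4x 32 GB DDR4-3600 setup that runs fine can end up with a lower true latency than a 4x 32 GB DDR5 setup that had to back down to something like DDR5-4800.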

Of course, on AM5 there is no option to go DDR4, it's DDR5 only. And eventually, even Intel will move to DDR5 only. So, either make do with two modules and have the RAM still run at nice speeds, or use four modules in the knowledge that there might be issues and the RAM speed will end up being lower. XMP speed might not be stable, so the "DRAM Frequency" setting might have to be lowered manually from XMP for it to work.

Generally, in case of RAM problems, no matter the technology, there are three possibilities, which can also be used in combination:
- Lower the frequency
- Loosen the timings
- Raise the voltage(s)

But in some cases, buying different RAM might be the best solution.


3) Amount of RAM

For a decent system up to mid-range, 16 GB (as 2x 8 GB) has been the norm for a long time, for good reason. Now, with DDR5, 32 GB (as 2x 16 GB) is slowly becoming the amount that a lot of people go for, at least for nice mid-range systems upwards. While 16 GB is actually still enough even for the most recent games, the system will be a bit more future-proof with 32 GB total. Anything beyond that, however, is useless for gaming; it only tends to make things worse.

Why is that? Games don't really need more than 16 GB. A lot of games are developed with the lucrative console market in mind, and even the PlayStation 5 only has 16 GB of RAM. So games are designed from the ground up not to need more RAM, which then also applies to the PC versions of those games. There are only very few games that can use more than 16 GB RAM, and it doesn't even make them run a lot faster. And i don't know a single game that will use more than 32 GB RAM; they are not even anywhere near that. So even for a high-end gaming system, i would never use more than 32 GB total, when no game can use it anyway (and that's not about to change either). The 2x 8 GB (mostly DDR4) / 2x 16 GB kits always cause the least trouble and run the fastest, that's why one of those is the best choice for a gaming PC.

64 GB RAM or more can be justified for large video editing projects, rendering, heavy photoshop use, running lots of VMs and such cases. However, 64 GB amounts to a waste of money for gaming, no matter what. Before any game will ever touch more than 32 GB, the whole PC will be long outdated, because it will take many years. Right now, most games restrict themselves to 16 GB maximum, because so many potential buyers out there have 16 GB RAM in their system. The next step would be for games to use up to 32 GB, but we're not even there yet. So no system that is put together primarily for gaming should use more than a kit of 2x 16 GB RAM.

We could just be like, ok, the money for that 64 GB RAM (or more) would be wasted because it doesn't have any benefits for gaming, but "more is better", so let the people use more RAM for their nice gaming system. However, when using large 32 GB RAM modules and/or four memory modules, it not only has no benefits, it also has a negative impact on the memory system. The bigger modules usually tend to run slower, and these configurations will also cause more stress for the memory system, increasing the likelihood of problems. So for gaming, i would never choose a configuration which can only cause problems for the memory system, but doesn't provide any benefit from that much RAM being available.


Recommendations for use on modern consumer mainboards:
8 GB RAM: Use 2x 4 GB, or even 1x 8 GB if RAM performance isn't critical anyway - this is ok for entry-level systems, office work etc.
16 GB RAM: Use 2x 8 GB - for up to mid-range (gaming) systems
32 GB RAM: Use 2x 16 GB - for nice mid-range to high-end gaming systems (when all other bottlenecks are removed) and semi-pro uses beyond gaming
48 GB RAM (DDR5 only): Use 2x 24 GB - for nice mid-range to high-end gaming systems (when all other bottlenecks are removed) and semi-pro uses beyond gaming
64 GB RAM: Use 2x 32 GB - purely "beyond gaming" - only necessary for professional use - preferable over any four-module configuration
96 GB RAM (DDR5 only): Use 2x 48 GB - purely "beyond gaming" - only necessary for professional use - preferable over any four-module configuration
128 GB RAM total: Use 4x 32 GB - purely "beyond gaming" - only necessary for professional use
256 GB RAM total: Use 4x 64 GB - purely "beyond gaming" - only necessary for professional use

These last two configurations - using four dual-rank high-capacity modules - are maximally stressing the memory system, so they will probably be restricted to something like DDR4-3200 or lower, or DDR5-5200 or lower respectively. Any higher speeds might not run reliably.

The new DDR5-only option of 2x 24 GB is quite similar to 2x 16 GB, since the 24 GB modules should still be single-rank, basically making them as easy to run as the 16 GB modules. And thus preferable to the 32 GB modules, which are definitely dual-rank and put a higher stress on the memory system.

Also, for 128 GB total, i recommend DDR4, not DDR5. DDR5 really doesn't run well with 4x 32 GB, it would be restricted to quite low frequencies, pretty much negating the DDR5 advantage. With DDR5 RAM, i would actually never recommend using four modules, not even 4x 8 GB (the 8 GB modules are slower and 2x 16 GB work better).

As for the XMP speed: For all the DDR4 configurations up to 64 GB total, i usually recommend DDR4-3600 speed (see chapter 4). For DDR5, the sweet spot would probably be DDR5-6000. Above that, it can gradually become more challenging to stabilize. Around the high DDR5-6xxx range or even into DDR5-7xxx, it's something for enthusiasts who know what they're doing, that's not a "plug & play" speed anymore (especially with AM5), and experience is required to make it work.



3b) How to increase the RAM size when you have 2x 4 GB or 2x 8 GB RAM?

First choice: Replace the 2x 4 GB with 2x 8 GB, or the 2x 8 GB with 2x16 GB. The new RAM should be a kit of matched modules. This will ensure the best performance and the least problems, because there's only two modules again in the end.

Second choice: Add a kit of two matching modules to your two existing modules. But you might not be able to get the same modules again. Even if they are the same model, something internally might have changed. Or you might toy with the idea of adding completely different modules (for example, adding 2x 8 GB to your existing 2x 4 GB). This can all cause problems. The least problems can be expected when you add two modules that are identical to your old ones. But then there's still this: You are now stressing the memory system more with four modules instead of two, so the attainable RAM frequency might drop a little. Also, it's electrically worse on a mainboard with daisy-chain layout, as explained under 1).

Lastly, adding just one more module (to have three modules total) is by far the worst choice for several reasons. Every desktop platform has a dual-channel memory setup. This means it works best with two modules, and it can work decently with four modules. And if you only use the PC for light office work, even a single 4GB or a single 8GB module would do. But in a PC where performance matters, for example for gaming, getting a single RAM module to upgrade when you have two existing modules is not good at all. The third module will be addressed in single-channel mode, while simultaneously ruining the memory system's electrical properties and making everything work at whatever the slowest module's specification is.

Note: When upgrading the RAM, it's always good to check for BIOS updates, they often improve compatibility with newer RAM modules (even if it's not explicitly mentioned in the changelog).


4) DDR4 only: Today's sweet spot of DDR4-3600 with the latest CPUs

On AMD AM4, DDR4-3600 has been the sweet spot for quite a while. But Intel introduced new memory controllers in their 11th gen and 12th gen CPUs which also require a divider above a certain RAM frequency. Up to DDR4-3600 (and that is pretty much guaranteed), the RAM and the CPU's memory controller (IMC) run at the same frequency (Intel calls this "Gear1 mode", on AMD AM4 it's "UCLK DIV1 Mode" or "UCLK==MEMCLK", generally this can be called "1:1 mode"). Somewhere above DDR4-3600, depending on the IMC's capabilities, the IMC has to run on a divider for it all to work (which would be 1:2 mode), which makes it run at half the RAM frequency. This costs a lot of performance.

An example on Intel Z590 with a kit of DDR4-3200: The IMC doesn't require a divider and can comfortably run in 1:1 mode (Gear1), which has the best performance.

BIOS OC.png


The Gear2 mode that becomes necessary at high RAM frequencies has a substantial performance penalty, because the latencies increase (everything takes a little longer). This basically leads to the same situation that we already know from AMD AM4: RAM frequencies that are considerably above DDR4-3600 are almost useless, because of the divider being introduced for the IMC (memory controller). The performance loss with a divider is just too significant.

For the RAM performance to be on the same level again as DDR4-3600 without a divider (1:1 mode), it requires something like DDR4-4400 (!) with the divider in place (1:2 mode).

Looking at the high prices for DDR4-4400 kits, or what it takes to overclock a normal kit of RAM to that, it's not practical. So with Intel 11th- to 14th-gen CPUs on DDR4 boards, and of course AMD AM4 CPUs, the "sweet spot" is usually at DDR4-3600. This frequency is known to not require a divider for the memory controller and thus gives the best performance and bang-for-buck.

Some of the more recent CPU models can sometimes go a bit above DDR4-3600 without requiring a divider for the memory controller. But DDR4-3600 almost always runs well in 1:1 mode and has a better price/performance than RAM with higher specs, so it's still the top pick.
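
To put rough numbers on the 1:1 vs. 1:2 relationship, here is a minimal sketch (nominal clocks only - what actually happens depends on the BIOS and the individual IMC):

def imc_clock_mhz(ddr_rate_mts: int, gear: int) -> float:
    mem_clock = ddr_rate_mts / 2   # e.g. DDR4-3600 -> 1800 MHz real memory clock
    return mem_clock / gear        # Gear1: IMC at the same clock, Gear2: at half of it

print(imc_clock_mhz(3600, 1))  # 1800.0 -> Gear1, best latency
print(imc_clock_mhz(4000, 2))  # 1000.0 -> Gear2, IMC much slower despite faster RAM
print(imc_clock_mhz(4400, 2))  # 1100.0 -> roughly what it takes to catch up with DDR4-3600 in Gear1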

Here's an example of an AMD system (X570 with Ryzen 3900X). The tool HWinfo64 can show those frequencies in the "Sensors" window.
DDR4-3866 is too much to run in 1:1 mode, so the divider for the memory controller is active and performance is worse.
DDR4-3600 manages to run in 1:1 mode and the performance is better.

divider.png


The best thing on both platforms nowadays is to run DDR4-3600 without a divider and with some nice low timings if possible. Something like DDR4-4000 will usually make the BIOS enable the divider for the memory controller and it will be slower overall than DDR4-3600, despite the higher RAM frequency. This is because the latencies are effectively increased when the memory controller has to work at a lower frequency. With a DDR4-4000 kit of RAM for example, i would enable XMP, but then manually set a DRAM frequency of DDR4-3600. This should make the BIOS remove the divider for the memory controller and the performance will immediately be better.

Here's a page from an MSI presentation about 11th gen Rocket Lake CPUs, showing the increased latencies when the divider comes into play:
Gear1.jpg

And here's from an AMD presentation about the Ryzen 3000-series, showing similarly increased latencies once the divider is active:
AMD latencies.png


With the higher DDR5 speeds, a divider is practically always used, because it's not feasible to run the memory controller at the same speed anymore. But with DDR5, the divider for the memory controller has less of a penalty than with DDR4, because DDR5 can access a module via two separate sub-channels of 2x 32 bits (instead of one 64-bit channel like on DDR4). This allows for higher/better interleaving of memory accesses on DDR5 and alleviates most of the latency penalties. On AMD, the FCLK can be left at 2000 MHz with DDR5, it seems to be the new "sweet spot".


5) RAM stability testing

Memtest86 Free
from https://www.memtest86.com/
I use this as a basic stability test on a new system before i update the BIOS to the newest version (which is always one of the first things to do, as the factory BIOS will already be quite outdated). Also, since it runs from a USB stick/drive, i use it as a first check before booting Windows, when something has significantly changed with the RAM or its settings. One or two passes of this give me a good idea if the system is generally stable enough to start installing Windows (or boot it).

It's a good first test if you are completely unsure about stability, as well as a good "finisher" if you want to be extra sure that everything is ok with your memory system after doing other testing. The main advantage is that it runs from USB. The main disadvantage is that RAM tests in Windows are more thorough in catching errors.
Launch the included ImageUSB program to prepare a USB drive with it, then boot from that drive (press F11 during POST for the boot menu).
The row hammer tests at the end, which test for a purely theoretical vulnerability and take a long time, can be skipped.


Once in Windows, a quick way for detecting RAM instabilities is TestMem5 or TM5 for short: https://github.com/CoolCmd/TestMem5
TM5 delivers a good and relatively quick indication of RAM stability. Run as admin. I like to run it with the "1usmus_v3" configuration which can be selected under Settings, because it reliably detects instability for me. A full run takes 90 minutes, but if there's instability, it should detect errors much earlier than that, i found.
This is my go-to RAM test in Windows, because it is pretty reliable at revealing RAM errors when things are not 100% stable yet.

Example of unstable RAM (found after three minutes already):

Screenshot.png


Any errors are not acceptable, meaning, something about the RAM configuration has to be changed so it passes without errors.
This example screenshot is not from me, you see they used the "Universal 2" configuration, i prefer the "1usmus_v3" one as mentioned.

Now, armed with just these two tools (Memtest86 for a basic stability test before even installing/booting Windows, and TM5 for more thorough testing in Windows), you should be able to detect most instability just fine. Therefore, the following tools are more for when you are really serious about RAM testing, for example if you manually tune all the timings and just want to test it in every way possible.


To more thoroughly test RAM stability, there is a test from Google, and it's called GSAT (Google stressapptest). It has been specifically developed by Google to detect memory errors, because they use ordinary PCs instead of specialized servers for a lot of things. The only downside is, it takes a bit of time to set up. To run GSAT, you first have to enable the "Windows Subsystem for Linux":

0*N8OWBM7IUXaCsH7C.jpg


After the necessary reboot, open the Microsoft Store app and install "Ubuntu", then run Ubuntu from the start menu.
It will ask for a username and password, they are not important, just enter a short password that you remember, you need to enter it for the update commands.
Then run the following commands one after the other (copy each line, then right-click into the Ubuntu window to paste it, then press enter):

sudo apt-get update
sudo apt full-upgrade -y
sudo apt-get install stressapptest

Then you can start GSAT with the command:
stressapptest -W -M 12000 -s 3600

This example tests 12 GB of RAM (in case of 16 GB total, because you need to leave some for Windows), for 3600 seconds (one hour). You can also enter -s 7200 for two hours.
If you have more RAM, always leave 4 GB for Windows, so with 32 GB, you would use "-M 28000".
GSAT looks unspectacular, just some text scrolling through, but don't let that fool you, that tool is pretty stressful on your RAM (as it should be).
At the end, it has to say Status: PASS, and there should be no so-called "hardware incidents". Otherwise it's not stable.
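
If you don't want to do the mental math for the -M value, here is a small helper sketch (my own illustration, not part of GSAT) that applies the "leave about 4 GB for Windows" rule from above and prints the matching command:

def gsat_command(total_ram_gb: int, seconds: int = 3600) -> str:
    test_mb = (total_ram_gb - 4) * 1000   # e.g. 16 GB total -> 12000 MB to test
    return f"stressapptest -W -M {test_mb} -s {seconds}"

print(gsat_command(16))  # stressapptest -W -M 12000 -s 3600
print(gsat_command(32))  # stressapptest -W -M 28000 -s 3600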


Then, HCI Memtest is quite good. There is a useful tool for it, called MemTestHelper: https://github.com/integralfx/MemTestHelper/releases/tag/v2.2.0
It requires Memtest 6.4, which can be downloaded here: https://www.3dfxzone.it/programs/?objid=18508
(Because in the newest Memtest 7.0, they made a change so that MemTestHelper doesn't work anymore and you would be forced to buy Memtest Pro.)

Put both tools in the same folder. Start MemTestHelper, and with 16 GB RAM, you can test up to 12000 MB (the rest is for Windows).
Let it run until 400% are passed. That's a good indicator that your RAM is stable. If you want to make really sure, let it run to 800%.

memtest_1.png


Another popular tool among serious RAM overclockers is Karhu from https://www.karhusoftware.com/ramtest/
But it costs 10€ to register, so i would just use the other free programs (unless RAM OC is your hobby).


A stability test which also challenges the memory controller a lot, and is therefore definitely useful to round out the RAM-related testing:
Linpack Xtreme from https://www.techpowerup.com/download/linpack-xtreme/

Run Linpack, select 2 (Stress test), 5 (10 GB), set at least 10 times/trials, press Y to use all threads, 2x N, and let it do its thing.
It's one of the best tools to detect instability, but warning, this also generates a lot of heat in the CPU. So i would watch the temperatures using HWinfo64 Sensors.
Each trial has to say "pass", and it has to say "checks passed" at the end.

linpack.png


It also puts out a "GFlops" number, that one is actually a decent performance metric to quickly judge if a certain RAM tuning (lowering timings) has performance benefits.



An important note about RAM and heat: Higher ambient temperatures are not good for RAM stability. The RAM might be perfectly stable in a RAM-specific stress test, but depending on the graphics card (its power consumption and cooling design), once it dumps its heat into the case very close to the RAM slots during gaming, there can be RAM-related crashes. Simply because it heats up the RAM a lot and makes it lose stability.

So to be absolutely sure that the RAM is stable even when it's hot, it can be good to run something like FurMark alongside the RAM stability test. Not for hours, because FurMark creates extreme GPU load, but just for 20 minutes or so, to really heat things up. A lot of times, the fins of the cooler are oriented towards the mainboard and the side panel, so the heat comes out from the sides of the card, and the RAM sits right above that.

If your RAM is fine in isolated RAM stress tests, but you have crashes in games (or when otherwise loading the GPU) with the same RAM settings, then you need to loosen up those settings a bit to add more headroom for those circumstances. Go by the three principles of RAM instability: Loosen timings and/or lower frequency and/or raise voltage.



Deep-diving a bit more into RAM:
It can quickly become a bit complicated, but if there are any questions, feel free to ask.


My other guides:
Guide: How to find a good PSU
Guide: How to set up a fan curve in the BIOS


Someone asked me if they can thank me for my work by sending me something via Paypal: Yes, that's possible, just write me a message and i'll tell you my Paypal 😉
 
@Waldorf About Memtest86, i don't recommend it as the final word on stability, i say it's a "good first test if you are completely unsure about stability". Apart from that, i get feedback from people all the time that it can detect purely settings-based instability before even having to boot Windows. Having said that, you're right, the Windows tools like TM5 etc. are certainly more useful when it comes down to fine-tweaking, or when Memtest86 gives the all-clear but there still might be instability.

For the nicer DDR5 memory ICs, SK Hynix A-die is all the rage now, i believe.

and unless i missed something, 4 sticks would be fine, as long as its fast enough to match the systems "sweetspot".

Yes, you're missing something, because with four big dual-rank modules like he's planning to use, said sweet spot becomes much harder to reach (that's if we assume that the sweet spot for 14th gen + Z790 would be something like DDR5-6000~6400):

ace.png


The best solution performance-wise would be to use 2x 48 GB (if 96 GB total are enough for the workloads). This is rated for "up to 6600+ MHz" (always to be taken with a grain of salt, it depends a lot on the individual CPU's IMC as well).

With 4x 32/48GB, having to back down to DDR5-5600 or -5200 seems very likely, in some cases even to -4800 if it turns out to be particularly problematic for some reason. For 4x 48 GB, there's only two kits that can be seriously considered, Corsair CMK192GX5M4B5200C38 or CMH192GX5M4B5200C38 (same kit really, but the latter with RGB). This has actually been reviewed here. But again, a RAM review is only as good as the CPU's IMC that has been used for it.

For 4x 32 GB there is more choice on the market now, and there's no need to shy away from DDR5-5600 there. If it doesn't want to run at that, then just lower the DRAM Frequency setting by hand to -5200 or below. I wouldn't accept anything lower than DDR5-4800 though to make it stable, because that would form too much of a performance bottleneck at that point.
 
@citay
not a good idea to use any memtest(86) with "fixed" running time, e.g. where the suite stops after running every test once.
those are for detecting DEFECTIVE ram, not for tweaking/stability testing.
and as you correctly listed, for that you will need stuff like TM5/HCI, e.g. any ram test that runs "indefinitely" (until shutdown),
as those are the only ones able to detect intermittent errors.
i can pass every (fixed time) version of memtest on tweaked ram, while triggering errors on the others within 1-20 min.

anytime ram is stable (HCI/TM5), it's not the ram when a game crashes, more likely the game not "happy" with something
might be IF/MC, but if ram is stable, it's stable, e.g. if i don't have errors running HCI for +2 days, it's not the ram causing issues.

@truthntraditio156902db
MEGs are great boards, incl ram tweaking.

i recommend getting something above bare min for corsair, the vengeance series is known to have multiple dies being used for the same model nbr,
so it's a crapshoot what die you get, i would say stick with LPX or Dominator when it comes to corsair, or kits with specs known to be b-die (or equivalent).

and unless i missed something, 4 sticks would be fine, as long as its fast enough to match the systems "sweetspot".
I was originally looking at the dominator line, but they didn't support the amount of RAM I wanted.
*(96 GB DDR5-6000 | 4 x 24GB is the max for dominator line - according to my PC Parts Picker compatibility list).
I hear what you're saying about the "sweet spot" but how would I find that for the system I'm building?
Thanks again for the reply!
T.
 

Attachments: 1719595840192.png
First i would try to get a better idea what amount of RAM your workloads/workflow would actually require. I mean, we are talking about a 64 GB difference between 128 and 192 GB, which is a huge delta (more than most people have total). If you are that unsure about the requirements, who says that 96 GB might not be enough? Then you can use 2x 48 GB which would be the lesser of all the evils. For 4x 32 GB, the next size up, i would say you should target a DDR5-5600 kit, and then reduce the speed manually if required for stability (easily done in the BIOS). Lastly, with 4x 48 GB kits, the choice is done for you, there's only the two Corsair kits at the top of your picture, basically (and they are identical apart from the RGB).
 
Thank you very much for the speedy reply!
While Adobe will say 16GB minimum & 38GB recommended, when you go to the places who build PCs for pros *(& talk to professionals)*, they'll mostly say "you need more".
My rationale for wanting 128GB min is that Adobe products are notorious RAM hogs & I plan on running 2 of these programs at the same time.
This is reinforced by seeing that Puget Systems pre-built AfterEffects specific PC comes with 128GB (with the option to upgrade up to 256GB).

My takeaway from this is that if I want to do what I want to do, I'm going to need to get the 4x32(+) & do some tweaking.
One final question/s, when you said:
4x 32 GB, the next size up, i would say you should target a DDR5-5600 kit, and then reduce the speed manually if required for stability (easily done in the BIOS).
How would I know if it was "required for stability"? Is it likely to be unstable out of the box? Will this be obvious, or is there something I should be doing to test/check before putting the PC through its paces? *(I do plan on doing the standard benchmarking after it's built, but again, I'm far from being even an expert on PC builds.)*

Thanks again for your time.
T.
 
My rationale for wanting 128GB min is that Adobe products are notorious RAM hogs & I plan on running 2 of these programs at the same time.
This is reinforced by seeing that Puget Systems pre-built AfterEffects specific PC comes with 128GB (with the option to upgrade up to 256GB).

Their 128 GB configuration is probably because they can get the 4x 32 GB kits the cheapest and it's the most common for such a professional PC. And if you look closely at the 256 GB configuration, that's an entirely different system altogether, using a workstation board with a workstation CPU to be able to use RDIMM (Registered DIMM), which you can't use on a normal desktop board. And the price for the system doubles.

This does not mean that the more recently available option of 2x 48 GB wouldn't be enough. If they say 38 GB is recommended, and you have 2x 48 GB for two of these programs, it seems to be worth a try to me. Worst case, you have to return it because you see that it needs even more RAM. But i would probably give it a shot.

Then you would check in the task manager, in the memory tab, the "in use" and "committed" figures.

xpg_spectrix_d50_ddr4_rgb-6.png


"In use" is the physical RAM being used by all processes that are currently running. With "Committed", the left number is "commit charge", what all processes together could maximally use if they need to use all their virtually allocated RAM for some reason. The right number is "commit limit", that's not relevant now. The main one is obviously "In use". If that one doesn't get near to 96 GB with your workloads, you probably don't need more RAM. If "Committed" (the left number of it) goes above 96 GB, it's a hint that having more RAM might help in a worst-case scenario.

How would I know if it was "required for stability"? Is it likely to be unstable out of the box? Will this be obvious, or is there something I should be doing to test/check before putting the PC through it's paces. *(I do plan on doing the standard benchmarking after it's built, but again, I'm far from being even an expert on PC builds.)*

You have probably not seen my post above your last one, but yes, using four high-capacity (and therefore dual-rank) modules is putting quite some strain on the memory system, to the point where even MSI in their "marketing specs" only say "up to 5600+ MHz" (which isn't really MHz, it's MegaTransfers/s aka MT/s, but ok). So DDR5-5600 with 4x32 GB might not be stable right out of the box. You need to run stability tests, such as what i list under 5) in the first post of this thread. Stability tests differ from benchmarks in that stability tests verify the calculation results and check them for errors, and notify you if there are any. A benchmark will just run, or with severe instability it will crash. It will not detect more subtle instability.
 
Thank you again, I greatly appreciate your time & efforts.
I think I've got it narrowed down to the point & can stop researching & start buying.
Here's the build if you're interested - https://pcpartpicker.com/user/Mr_Enigmatic/saved/nd3M3C
 
@Waldorf About Memtest86, i don't recommend it as the final word on stability, i say it's a "good first test if you are completely unsure about stability". Apart from that, i get feedback from people all the time that it can detect purely settings-based instability before even having to boot Windows. Having said that, you're right, the Windows tools like TM5 etc. are certainly more useful when it comes down to fine-tweaking, or when Memtest86 gives the all-clear but there still might be instability.

For the nicer DDR5 memory ICs, SK Hynix A-die is all the rage now, i believe.



Yes, you're missing something, because with four big dual-rank modules like he's planning to use, said sweet spot becomes much harder to reach (that's if we assume that the sweet spot for 14th gen + Z790 would be something like DDR5-6000~6400):

ace.png

The best solution performance-wise would be to use 2x 48 GB (if 96 GB total are enough for the workloads). This is rated for "up to 6600+ MHz" (always to be taken with a grain of salt, it depends a lot on the individual CPU's IMC as well).

With 4x 32/48GB, having to back down to DDR5-5600 or -5200 seems very likely, in some cases even to -4800 if it turns out to be particularly problematic for some reason. For 4x 48 GB, there's only two kits that can be seriously considered, Corsair CMK192GX5M4B5200C38 or CMH192GX5M4B5200C38 (same kit really, but the latter with RGB). This has actually been reviewed here. But again, a RAM review is only as good as the CPU's IMC that has been used for it.

For 4x 32 GB there is more choice on the market now, and there's no need to shy away from DDR5-5600 there. If it doesn't want to run at that, then just lower the DRAM Frequency setting by hand to -5200 or below. I wouldn't accept anything lower than DDR5-4800 though to make it stable, because that would form too much of a performance bottleneck at that point.
Hi, citay! I hope you are OK, seeing you after some period.

Let me be a self-proclaimed arbiter here, for both you and Waldorf have a piece of truth in this misunderstanding. I have always understood Memtest86 is for initial testing for errors, which is clear from the context, but I have also heard it repeated in our discussions, so, no problem for me. However, if you quote your original paragraph correctly, you may see what Waldorf, as I assume, refers to:

"It's a good first test if you are completely unsure about stability, as well as a good "finisher" if you want to be extra sure that everything is ok with your memory system after doing other testing."

Waldorf is a new member, after all, no need to put them down, with their own piece of experience. We will always appreciate your precious knowledge here, citay, if I can speak for others ;)
 
I think I've got it narrowed down to the point & can stop researching & start buying.
Here's the build if you're interested - https://pcpartpicker.com/user/Mr_Enigmatic/saved/nd3M3C

Yeah, try the 2x 48 GB kit first. Liquid Freezer III is a solid choice if you need the processing power of an i9 for your workloads. Not sure about the HDDs/SSDs, why several smaller ones? For example i would get one 990 PRO 4TB, or maybe two 2TB ones at the most (if you want to separate something), there's no advantage in preferring three smaller ones, and the cost is higher too.

BTW, check for firmware updates in Samsung Magician, apply it, reboot, then check that your power plan in Windows is still on "Balanced" (sometimes Magician likes to mess with it for some weird reason).

Waldorf is a new member, after all, no need to put them down, with their own piece of experience. We will always appreciate your precious knowledge here, citay, if I can speak for others ;)

Sorry, that must've come over wrong, i tend to write quite direct and to-the-point replies, which may come across as a bit harsh if i don't add smileys. But they are not meant like that, i don't ever want to put anyone down, it's just the German way to talk like that more or less 😅. I simply wanted to explain something, but no hard feelings about anything at all. If he has a counterargument to make, i am all ears, and i am also willing to amend any statements i may have made which might not be entirely correct.
 
It is all right, citay, thank you for your kind words. I just wanted to point out that little discrepancy. Waldorf is a bit boastful themselves, they (those pronouns are killing me, LOL) will not fall apart for that either, I believe. Good to see you are in a good mood yourself. Your precision is well known here. I myself looked up the phrase "put down" in a dictionary before I used it and I saw it offers a palette of nuances, from simple "criticise" to a bit more negative connotations. I meant it in the least aggressive way. :D
 
The reason for the multiple small drives is due to the type of work I'm doing. (based on recommendations from people in the industry)
Of the 2x 1TB 990 Pros: 1 is for the operating system & 1 is for the cache/scratch drive *(the latter will get beat to hell being repeatedly filled & purged with Adobes cache & thus replaced more frequently).
The 2TB 990 Pro drive is my "hot drive" for DLs, etc. (I will likely be upping this to 4TB as I've taken long enough to decide on things that I've saved more).
The 2x N300s are just my onboard physical backups.

One final question, If I go with the 2x 48gb RAM, & it's not enough, can I buy another pack of the exact same ram *(brand, speed, etc)* to increase the RAM?
Or do I have to replace it with a single, 4 DIMM pack of RAM?

Thanks again. Have a great weekend!
 
Of the 2x 1TB 990 Pros: 1 is for the operating system & 1 is for the cache/scratch drive *(the latter will get beat to hell being repeatedly filled & purged with Adobes cache & thus replaced more frequently).

Ah ok. No need to replace it as a preventative measure though. An SSD using typical NAND flash of good quality has been shown to be able to outlast the TBW stated in the specs significantly. So when the 600 TB are reached on the 1TB drive, all that happens is that the warranty is over, but you can still use it normally, it might reach several times that TBW before you ever see any problems.

The 2x N300s are just my onboard physical backups.

I would use external drives for backups (unless you got that covered already). Internal backups don't protect against that many possibilities.

One final question, If I go with the 2x 48gb RAM, & it's not enough, can I buy another pack of the exact same ram *(brand, speed, etc)* to increase the RAM?
Or do I have to replace it with a single, 4 DIMM pack of RAM?

The kit already has a quite ambitious XMP, i would've stayed in the sweet spot i mentioned here, namely DDR5-6000~6400 at 1.35V XMP (examples: Kingston KF560C32RSK2-96, KF560C32RSAK2-96, G.Skill F5-6400J3239F48GX2-RS5K, F5-6400J3239F48GX2-TZ5RK).

For use with four modules, DDR5-6600 CL32 is of course totally out of the question (when it's already ambitious with two), so while you can buy another one of the exact same kit (and hope that it's really 100% identical), it has zero chance of working at XMP then, so you will have to back down to DDR5-5600, if not lower. On the plus side, DRAM Voltage can then probably be reduced to 1.25-1.3V.

So, i think there might be a good chance of getting two kits working (at a much reduced DDR speed) if it becomes necessary for capacity reasons. And should that fail, you could always return both kits and get a matched kit of four modules.
 
@citay/Doc_Bucket
my problem is only regarding ram progs, that were written for checking defective chips, being recommended for stability testing,
i will not say memtest will never find any error caused by tweaking ever,
but on 10 rigs (given AM4, not 5) that i have set up (and tweaked) since 2nd ryzen came out, neither with slight instability (RFC/voltages off by a little) nor settings that would barely boot, did memtest86/+ show a single error, while others designed for stability testing, would.
and its repeatable, e.g. returning to jedec/reloading different profiles etc.

so far this seems to be the case, with any ram tool using fixed set of tests/time limited, and not run "forever" tools like HCI or TM5.

and i quote:
Why use MemTest86
Bad RAM is one of the most frustrating computer problems to have as symptoms are often random and hard to pin down. MemTest86 can help diagnose faulty RAM (or rule it out as a cause of system instability).

its fine for us to use in addition, but many folks searching for help might start with a free memtest, run it and think they are fine, when they aren't (stability wise),
and start having issues with data getting corrupted 6 months down the road.

@truthntraditio156902db
i would split the drives between brands, WD/Corsair/MSI have some decent stuff, so if there is an issue with a drive/FW, it won't affect all your drives.
make sure to avoid QLC tho.
 
@citay
There has been some confusion lately about what Intel (not MSI) actually means by "1DPC" or "2DPC" when using these terms in their processor specs.
Some say (and der8auer had this confirmed by Intel insider sources) that Intel considers "1DPC" to refer to motherboards that physically connect 1 DIMM slot to 1 memory channel, i.e. 2-slot motherboards, while "2DPC" refers to motherboards that physically connect 2 DIMM slots to each memory channel, i.e. 4-slot motherboards.
That would mean that Intel memory timings for 4-slot motherboards would always have to be read from the "2DPC" table, no matter if e. g. 2 DIMMs are put in slots that each feed into a different CPU mem channel.

With DDR5 and a 12th gen Alder Lake CPU, that would mean that the official timings for the usual 2 DIMM config on a 4 slot motherboard would not default to 4800, but to just 4400.
Yet my MSI Z790 Carbon WIFI booted up with a default of 4800 in this config, which reflects the common understanding that distributing 2 DIMMS so that each is put in a slot that feeds a different CPU mem channel is "1DPC", regardless if the motherboard has 2 or 4 DIMM slots.

What's your take on this?
 
Yeah, for Intel, the 1DPC/2DPC meaning might be like that, but we are not overly concerned with what Intel write, we know that the official specs for the RAM support by the CPU makers tend to be deliberately conservative, read here.

So when i use 1DPC/2DPC in this thread, it's about the way MSI use it, and there it obviously means if one or two slots per channel are populated on their boards with four DIMM slots. So it's about how everything about the memory system becomes worse when you use all four slots (2DPC) on a board with daisy-chained RAM slots.

We can always have these slight discrepancies where the same name can mean a different thing in a different context. But when we know the context, everything should be clear (most of the time). Another example is dual-channel vs. quad-channel with DDR5, both would be correct in different contexts, see here. If we look at it from the motherboard side, it's a dual-channel setup, if we look at it from the RAM module side, it uses two channels per RAM module, so you could call it quad-channel with just two RAM channels from the board. Some tools will actually show quad-channel with two DDR5 modules, or at least "4x 32-bit".
 
@citay
There has been some confusion lately about what Intel (not MSI) actually means by "1DPC" or "2DPC" when using these terms in their processor specs.
Some say (and der8auer had this confirmed by Intel insider sources) that Intel considers "1DPC" to refer to motherboards that physically connect 1 DIMM slot to 1 memory channel, i.e. 2-slot motherboards, while "2DPC" refers to motherboards that physically connect 2 DIMM slots to each memory channel, i.e. 4-slot motherboards.
That would mean that Intel memory timings for 4-slot motherboards would always have to be read from the "2DPC" table, no matter if e. g. 2 DIMMs are put in slots that each feed into a different CPU mem channel.

With DDR5 and a 12th gen Alder Lake CPU, that would mean that the official timings for the usual 2 DIMM config on a 4 slot motherboard would not default to 4800, but to just 4400.
Yet my MSI Z790 Carbon WIFI booted up with a default of 4800, which reflects the common understanding that distributing 2 DIMMS so that each is put in a slot that feeds a different CPU mem channel is "1DPC", regardless if the motherboard has 2 or 4 DIMM slots.

What's your take on this?

Btw: I am not so sure about information received from "Intel sources". I've seen official Intel staff on their support forums say that the 1.72V core voltage mentioned in the processor specs would be fine, which it certainly isn't. Buildzoid has an explanation as to where the 1.72V come from (essentially specifying the maximum voltage value that can theoretically be coded by the CPU into a voltage request made to the VRM, but not indicating an actual safe operating voltage)
My common sense tells me that the actual speed, as long as the DIMM itself can sustain it, will depend on the quality of the IMC in that specific die, and the motherboard tracing, among others. So, if you have these kinds of prerequisites, it is possible that the BIOS uses that headroom automatically and defaults to the given speed. But then, the question may arise, what mechanisms the BIOS would use to find that out without manual interference. I do not know much about the workings behind the scenes, to be honest, unless it is the actual number of occupied slots that counts, which might be a sufficient indicator to use that headroom.
 
So it again comes down to pulling their necks out of the noose by specifying memory timings so conservative that anything considered "normal" would already be out of spec and potentially warranty voiding.
Now I understand why workstations often run at surprisingly low memory speeds.
 
There has been some confusion lately about what Intel (not MSI) actually means by "1DPC" or "2DPC" when using these terms in their processor specs.
Some say (and der8auer had this confirmed by Intel insider sources) that Intel considers "1DPC" refer to motherboards that physically connect 1 DIMM slot to 1 memory channel, i. e 2-slot motherboards, while "2DPC" refers to motherboards that physically connect 2 DIMM slots to each memory channel, i. e. 4-slot motherboards.
That would mean that Intel memory timings for 4-slot motherboards would always have to be read from the "2DPC" table, no matter if e. g. 2 DIMMs are put in slots that each feed into a different CPU mem channel.

There is no confusion at all.
And don't buy anything you find on the internet!
These days all kind of (young) people share their "wisdom" over the internet and the result is terrible for all.
You can find the Intel official charts and tables here: https://forum-en.msi.com/index.php?...4th-gen-cpus-raptor-lake-ddr5-support.373119/
So there are 2 different things involved here:
- SPC (Slots Per Channel)
- DPC (DIMMs Per Channel)
Obviously the motherboards with 1 SPC (2 memory slots) are faster than those with 2 SPC (4 memory slots).
And the reason is very simple: more memory slots ---> more wires ---> more electrical noise
And more electrical noise leads to lower signal quality, more CRC errors, so obviously lower speed at the end.
BUT ...
No matter what others say, when it comes to memory speed, the difference between a motherboard with 2 memory slots and a motherboard with 4 memory slots is minimal !!!
A top 2 slots motherboard is guaranteed for DDR5-8000
A top 4 slots motherboard is guaranteed for DDR5-7800 (Example: https://www.msi.com/Motherboard/MEG-Z790-ACE-MAX/Specification )

The bottom line: when it comes to memory speed, the motherboard is almost irrelevant.
The main limitation here comes from the CPU IMC.
In case of XMP (IMC overclocking, undertiming and overvolting), the difference between 2 x single-rank and 4 x dual-rank can reach 2000MHz !!!
See again the specs for MSI Z790 Ace Max.
 
@RemusM
but when looking at "records", it's usually the 2-slot boards leading. at least looking at ddr4 and before, same with sticks with/without rgb.
might not be much, but it's there.
 
but when looking at "records", it's usually the 2-slot boards leading. at least looking at ddr4 and before, same with sticks with/without rgb.
might not be much, but it's there.

Obviously the motherboards with 1 SPC (2 memory slots) are faster than those with 2 SPC (4 memory slots).
And the reason is very simple: more memory slots ---> more wires ---> more electrical noise
And more electrical noise leads to lower signal quality, more CRC errors, so obviously lower speed at the end.

P.S.
All those records need insane memory and IMC voltages.
Those computers cannot be used for any serious work, not even for games. :biggrin:

P.S.2
Most of the records are achieved with a single small memory module (4 or 8GB). ;)
 
@RemusM
sorry, had been up for 20h and gaming for 16, so conversion from brain-to-comment didn't work properly :biggrin:

my reply was about the minimal difference between 2 and 4 slots boards, which might be true for ddr5, but not so much for ddr4 (on ryzen), at least what i had set up in the last years since it came out, but true, some of it indirectly (MC).
no, not those LN2 #1 spot chaser, but daily driver/gaming rigs...
 