Different undervolting methods with IA CEP enabled, and how they compare to Lite Load

Vassil_V

Diving into this hot and controversial topic - undervolting with CEP enabled!
I want to address the elephant in the room first - is disabling CEP potentially dangerous? The short answer is: probably not. I don't really know, and I'm not aware of any evidence that it could be harmful, especially if you have already set sensible settings in your BIOS. This is currently the widespread opinion online, including here on this forum, from people like citay, who has lots of experience. Arguing about whether or not CEP is necessary is not my goal with this post; I just want to share what I've learned and done.
This is also not intended to be a full guide on how to undervolt, including the basics. Citay's guide is extremely extensive and covers basically everything somebody new to this needs to get started.

TL;DR - you can check some results and notes here

First a very short backstory, which might provide you with some context.
About a month ago I switched to a desktop PC with a 13700K, from a laptop with a 12900HX, and even before I ordered the components I was already aware of the 13/14 gen issues, so one of my goals from day one was to stick with the basics and follow the official recommendations provided by Intel. Most of them are considered good practice anyway, such as setting ICCMax, proper power limits, enabling C-States and using a power plan in Windows that allows downclocking. IA CEP being enabled is also part of Intel's recommendations, so that's something I made sure was on before I installed Windows, along with applying the rest of the recommended settings where needed.

My first attempt at undervolting my 13700K was to lower the Lite Load mode, as I had read somewhere it does wonders, but I immediately faced a performance hit caused by CEP. Then I read that I had to disable CEP in order to properly undervolt using the Lite Load method, but as keeping it on was part of Intel's recommendations, I wanted to try a different approach first. With the 12900HX, the only way to undervolt was by using a negative offset, as there was no advanced BIOS available, so I already had some experience with setting offsets and I just defaulted to this. I tried it with the 13700K and it actually worked great (still does): it lowered voltages across the board, temps and power draw noticeably, and there was no performance hit because of CEP.
My Cinebench R23 score with the default motherboard settings is around 29K pts at best, which is enough performance for me, but the problem is the instant thermal throttling at 100C, and hitting the 253W default PL2. Also, voltages spike to 1.46-1.47V during normal usage.
With a -0.125V offset my score went up to 30700 pts, with a max power draw of 225W and 1.25-1.26V under the 225W load. I was happy with this setup so I used it for a few days without issues, then I tried a larger offset to see if it'd be okay. I went with -0.150V, which was also perfectly stable; at some point I also set a conservative PL1=125W and PL2=188W and everything was great. Voltages were fine, sometimes spiking to 1.33V, but generally under lighter load, so no major worries with that. I had tested for stability using y-cruncher, Prime95, OCCT, R23, R24, TimeSpy, and last but not least, through gaming and normal usage, but I watched a Buildzoid video where he mentioned Cinebench R15 is very good at exposing instabilities, so I thought I should test with it too. Sure enough, WHEA errors popped up after just 4-5 consecutive runs. I dropped the offset to -0.140V, and it is stable in R15.
Around the same time I started playing The Last of Us Part 1, and for the first time I got a bit concerned by the voltage I was seeing, as I was hitting 1.33-1.34V in-game and averaging 1.32V, which didn't seem ideal. Just to clarify - it probably isn't a problem, but I wanted to try to lower it a bit. So I started experimenting with different ways to lower the max VCore in gaming and also during lighter usage, while keeping CEP enabled. Even though I still have no idea whether it protects my CPU from anything, if I can achieve the results I want with it enabled, I don't see a reason to disable it.

Increasing the voltage offset was obviously not an option, because I had just decreased it from -0.150V to -0.140V. R15 causes WHEAs for me when VCore starts hovering just below 1.18V at full load, and -0.150V puts me right in that range. Therefore, I knew what my target voltage under load was - at least 1.18V, but less than 1.19V - so now I needed to find a way to achieve that while maintaining performance, and at the same time bring the VCore under lighter load and gaming down to 1.3V max.

CEP, AC/DC load lines and LLC
If I understand correctly, CEP is triggered by differences between the AC load line (set in mOhms) and the LLC mode (also corresponding to mOhms), where LLC determines how much Vdroop (drop in voltage during heavy CPU load) is being counteracted by the VRM. The AC value lets the CPU know what Vdroop it should expect, so that the CPU can properly calculate the voltage request it should send to the motherboard (at least in theory). If the AC tells the CPU it should expect "x" Vdroop under load, while the LLC allows for "x+5" Vdroop under load, then the CPU effectively gets more undervolted the higher the CPU load is. That's why undervolting by lowering the AC load line is so effective when benchmarking or running heavy loads - it hides from the CPU the fact that Vdroop is expected, so the CPU thinks it's okay with requesting a lower voltage, as it assumes the motherboard will compensate for the Vdroop.
If CEP is enabled, this is where it freaks out and starts clock stretching to prevent potential instability, even though the system might otherwise be completely stable and well-performing. This clock stretching effectively reduces the CPU's power and current draw, allowing it to remain stable at a lower voltage - a voltage that CEP nonetheless treats as unstable, because it is so much lower than what the CPU expects to receive. So this is why R23 scores can drop by 50% even though you know the Lite Load mode you've selected is stable with your CPU. CEP is not triggered by offsets, because they shift the entire voltage-frequency curve of the CPU, so you can just make it request lower and lower voltages by applying a larger offset, until it is simply unstable. CEP will not kick in, as it won't detect a difference between the requested voltage and the supplied one.
However, CEP also seems to have a buffer zone and doesn't kick in unless AC drops somewhere below ≈67% of the LLC impedance. So you can lower only the AC load line without a performance hit caused by CEP - just not by much.
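To make this interplay a bit more tangible, here is a minimal Python sketch of the simplified model described above. To be clear, the formulas (VID ≈ base voltage + AC × current, delivered VCore ≈ VID − LLC × current) and all the numbers are made-up illustrations, not Intel's actual VID algorithm, and the ≈67% threshold is just the empirical buffer zone mentioned above.

```python
# Illustrative sketch only: a crude model of how the AC load line and LLC interact.
# The real VID calculation is far more involved (V/F curve, temperature, predicted
# current, etc.); every number below is made up for demonstration.

def requested_vid(v_base, ac_mohm, current_a):
    """Voltage the CPU asks for: base V/F voltage plus the Vdroop it EXPECTS,
    based on the AC load line (mOhm * A = mV, converted to V)."""
    return v_base + (ac_mohm / 1000.0) * current_a

def delivered_vcore(vid, llc_mohm, current_a):
    """Voltage that actually reaches the CPU after the Vdroop the LLC really allows."""
    return vid - (llc_mohm / 1000.0) * current_a

V_BASE, LOAD_AMPS, LLC_MOHM = 1.20, 150.0, 1.1   # made-up load, LLC roughly Auto/Mode 8

for ac in (1.1, 0.8, 0.2):                       # default, "buffer zone", Lite Load 5-ish
    vid = requested_vid(V_BASE, ac, LOAD_AMPS)
    vcore = delivered_vcore(vid, LLC_MOHM, LOAD_AMPS)
    cep_likely = ac < 0.67 * LLC_MOHM            # rough buffer threshold from above
    print(f"AC={ac} mOhm  VID={vid:.3f} V  VCore={vcore:.3f} V  CEP likely: {cep_likely}")
```

With matched AC and LLC, the delivered voltage ends up where the CPU expected it; the lower AC goes relative to LLC, the further the delivered voltage falls below the request, which is exactly the gap CEP reacts to.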

The DC load line doesn't directly affect voltage; what it does is calibrate the power measurement done by the CPU. The DC value in mOhms should match the LLC's impedance in mOhms, so that ideally, when DC and LLC are properly calibrated, VID=voltage supplied to CPU. This ensures proper power measurement, which is especially important if you have a power limit set that's always hit under full load. If DC is set too low, VID will be inaccurately higher, which will lead to an inaccurately high power measurement, so you'd effectively power throttle your CPU on top of the power limits you have set. If DC is set too high, then the VID will be inaccurately lower, which can turn your 200W PL2 into a 205W one, for example. Small differences probably won't be noticeable, but that's the general idea.
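As a rough illustration of the power-reporting side (again a toy model with invented numbers, not the CPU's real telemetry path), the sketch below shows how a DC value that doesn't match the actual LLC impedance skews the power the CPU reports against your power limits:

```python
# Sketch: how a DC load line mismatch skews the CPU's own power estimate.
# Purely illustrative; all values are made up.

def droop_corrected(v_request, mohm, current_a):
    """Voltage after subtracting a droop of (mOhm * A)."""
    return v_request - (mohm / 1000.0) * current_a

V_REQUEST, LLC_MOHM, CURRENT = 1.38, 1.1, 180.0    # request, real droop, load amps

vcore = droop_corrected(V_REQUEST, LLC_MOHM, CURRENT)   # what the die really gets
real_power = vcore * CURRENT

for dc in (0.9, 1.1, 1.3):                         # DC too low / matched / too high
    vid = droop_corrected(V_REQUEST, dc, CURRENT)  # CPU's own voltage estimate via DC
    print(f"DC={dc} mOhm: reported ~{vid * CURRENT:.0f} W, real ~{real_power:.0f} W")
```

Too low a DC makes the reported power read high (you throttle a few watts early); too high a DC makes it read low (your 200W limit quietly becomes a bit more), matching the behaviour described above.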

So, with all that in mind, what options do we have to undervolt when CEP is enabled, besides just by setting an offset? We have to abide by one general rule - AC should not be set to a value that's below ≈70% of DC=LLC. It sounds simple enough, but it has implications.
If we want to reduce AC to a value similar to a relatively low Lite Load mode, let's say AC=20=0.2 mOhms (as Lite Load 5 does), DC=LLC cannot be set higher than 0.2/0.7 ≈ 0.28 mOhms (rounded down). But we have to keep in mind that LLC is applied using presets, so we have a limited number of options for DC if we want to properly match it to a given LLC mode. Also, going to a lower-numbered LLC mode (e.g. from 8 to 4 on MSI motherboards; on Asus, for example, it's the opposite) means that you are asking the VRM to compensate more for the Vdroop. To do that, the VRM has to artificially boost the voltage to the CPU when the CPU is under load, but when the load suddenly goes away, this additional voltage applied by the VRM can cause a sudden voltage spike that shoots above the CPU's target VID (called an overshoot), which technically has the potential to be harmful over time, as it can deliver excess voltage to the CPU. How big the risk is depends a lot on the quality of the motherboard, but it is a risk nonetheless. This exact topic is not something I've researched too much, but the general consensus is that for most people an LLC mode that allows a healthy amount of Vdroop is the better option. I'd appreciate comments on this from people who are using a flat LLC or strong modes - what is your experience and setup, and what benefits do you find in this?
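Putting numbers on that rule (the 0.7 factor is only the empirical buffer discussed earlier, not anything Intel documents), here is a quick sketch:

```python
# Sketch: the "AC must stay above ~70% of DC=LLC" rule of thumb in numbers.
# The 0.7 factor is the empirical CEP buffer discussed above, not an Intel spec.

CEP_RATIO = 0.7

def max_dc_for_ac(ac_mohm):
    """Highest DC (= LLC impedance) a given AC can reasonably be paired with."""
    return ac_mohm / CEP_RATIO

def min_ac_for_llc(llc_mohm):
    """Lowest AC that should stay clear of CEP for a given LLC impedance."""
    return llc_mohm * CEP_RATIO

print(max_dc_for_ac(0.2))    # ~0.28 mOhm (rounded down) -> needs a very strong, almost flat LLC
print(max_dc_for_ac(0.68))   # ~0.97 mOhm -> fine with an LLC around Auto/Mode 8
print(min_ac_for_llc(1.1))   # ~0.77 mOhm -> lowest AC that stays clear of CEP at the default LLC
```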

Going back to the problem of lowering AC with CEP enabled, the above means that we have a narrow window to work with for DC=LLC, in my opinion somewhere between 0.4 and 0.7 mOhms. Any lower than that and you'd be asking the VRM for significant Vdroop compensation. Any higher than that and you can just go with the default DC=110=LLC=Auto, and you don't have to worry about matching DC to LLC, but at the same time you can't lower AC as much as you might want to.

But if you want to worry about matching them... (like me), see below.

With the latest BIOSes, especially the ones with the 0x129 microcode, MSI's motherboards mostly (if not exclusively?) default to the "Intel Default" settings, which have AC=DC=110 (1.1 mOhms) and LLC on Auto. What this should mean is that DC=110=1.1 mOhms is calibrated for LLC=Auto. An important note here is that I've tested LLC=Auto and LLC=8 on my motherboard, and they have the exact same Vdroop behaviour, and other people, with different MSI motherboards such as the Z790 Tomahawk, have also confirmed the same.
So, this means that with DC=110 (1.1 mOhms) and LLC=Auto=8, VID should match the voltage supplied to the CPU, right?
On mine, and many other MSI motherboards, the only sensor available to us for checking the voltage supplied to the CPU is VCore. Unfortunately, it is said to not be completely accurate. According to user SgtMorogan (among others) on the overclock.net forum, "Vcore will always read somewhat higher than reality due to the impedance between the die and the sensor." This can be found in this topic, which is widely shared in MSI motherboard-related discussions online. In there, you can find two different tables with supposed impedances, one for Z690 motherboards and one for Z790, with different values in mOhms across the LLC modes. One user with a Z790 Tomahawk board has tested different LLC modes and calculated the supposedly matching DC values. What's interesting is that according to him, LLC=8 pairs with DC=98 (0.98 mOhms), not 110 (1.1 mOhms), as we might assume given the default settings and the fact that LLC=Auto=8. Additionally, in the same thread, on page 3, user FR4GGL3 has shared the following:

"I asked MSI a few weeks ago. The Questioan was which exact Numbers in mOhms equal to the 1 to 8 Settings of LLC in the Bios.
The answer was:

The “CPU Loadline Calibration Control” settings (Auto, Mode 1 to 8) are fine tune results by RD team’s know-how, so please allow us to keep them secret.

The Auto setting would meet the Intel suggested values.
If user wants less voltage drop (more voltage compensation) when CPU is under high loading, please select Mode 1.
The bigger Mode number the more voltage drop.


So I would say "Auto" is 1.1 mOhms. At least on my Z690 Board. That is also what is listed here on the first few entries"


When I put full load on the CPU using the Intel Default profile with AC=DC=110 and LLC=Auto, VCore always reads higher than VID. I logged data via HWInfo and calculated the average differences across a few short runs of OCCT and R23, by first calculating the difference between VCore and VID for each polling point and then the average difference, and the result is almost always exactly 0.013V, or 13mV. The runs on which I've based this calculation begin at PL2 and then PL1 kicks in, and I've taken the average of the VCore-VID difference based on all the data. But even if I only review the PL2 or PL1 data separately, it is almost always exactly a 0.013V difference, ±2-3mV at most. Setting DC to 98-100 actually causes VID to almost perfectly match VCore. So what does this mean?
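For anyone who wants to reproduce this measurement, here is a minimal sketch of the calculation, assuming the HWInfo log has been exported to CSV. The file name and the column labels ("Vcore [V]", "VID [V]") are assumptions - the actual labels depend on your board's sensor layout and logging setup, so adjust them to match your own export.

```python
# Sketch: average VCore-VID delta from an HWInfo CSV log.
# File name and column labels are assumptions; check your own export and adjust.

import csv
import statistics

deltas = []
with open("hwinfo_log.csv", newline="", encoding="utf-8", errors="ignore") as f:
    for row in csv.DictReader(f):
        try:
            vcore = float(row["Vcore [V]"])
            vid = float(row["VID [V]"])
        except (KeyError, ValueError):
            continue                      # skip unit rows, repeated headers, empty cells
        deltas.append(vcore - vid)

print(f"samples: {len(deltas)}")
print(f"average VCore-VID delta: {statistics.mean(deltas) * 1000:.1f} mV")
```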

Option 1 - assuming that MSI have properly calibrated LLC=Auto to DC=110, which is the default, then VCore is indeed inherently inaccurate and always shows higher than it should, about 0.013V higher on average, at least on my motherboard.
Option 2 - if MSI are incorrectly defaulting to DC=110 while LLC=Auto is actually 0.98-1.0 mOhms, this would more or less explain the lower VID compared to VCore at the stock configuration.

I am willing to trust that MSI have not been incorrectly setting DC and LLC by default, as this doesn't even have anything to do with Intel. So, trusting the default settings means that if I want to change LLC to another mode and calibrate DC accordingly, I have to aim for the same 0.013V difference between VCore and VID that I'm seeing with the stock configuration. After some trial and error, I've found that on my motherboard, LLC=6 paired with DC=68 achieves the same 0.013V average difference as DC=110/LLC=Auto, under the same conditions.
In order for VID to match Vcore with LLC=6, DC should be set to ≈60, but I've found this impacts performance by a small margin, and I believe it's because it's effectively lowering my PL2 limit.

So, to recap:
- Lowering the AC load line, while keeping LLC=DC=110=1.1 mOhms, is basically what the Lite Load modes do and it's especially effective when high load is put on the CPU. A lot of Vdroop is allowed, but the CPU doesn't know it, so it's not asking for voltage to compensate for it, leading to a significant undervolt during high-load. CEP doesn't like that so it starts slowing down the CPU and reducing the power and current going to it.
- We can undervolt with CEP enabled, it's just more complex and requires a different approach.
- The ground rule is that AC cannot be <70% of DC/LLC; and DC should be calibrated to LLC, so that the VID-Vcore relation is the same as when using the default settings, after measuring it with the most precise sensor you have available.
- Alternatively, you could just go with VID=VCore, as even if this leads to higher inaccurate power reading, you could simply bump up your power limits by a few watts and nobody has to know about it. :biggthumbsup:
- We could technically go as low as we want with AC, as long as we don't break the above rule, but this naturally means that LLC also has to be made stronger (compensate more). Going too low with AC will quickly require an almost flat LLC, which is generally not recommended for most people unless you really know how to set it up and have a good high-end motherboard. It has other implications too, but I won't go into details.

If we don't want to set a very strong LLC, we have to keep AC at 30-35 at the lowest, so that we can set DC=LLC to at least 40. I have not experimented with this range, but went for 1-2 steps above it, aiming for LLC=6. It still allows for healthy Vdroop and doesn't have too much compensation. As mentioned above, it seems to match with DC=68, at least as long as I can trust the measurements.

I mentioned that the AC load line undervolt method works best under high CPU load. This is because, even though reducing AC also impacts the VID calculation without load (due to the somewhat mysterious way the CPU calculates its VID, using "predicted current"), a lowered AC doesn't have the same great undervolting effect when the CPU load is not high enough to induce Vdroop. At least this is how I interpret it. So, what you end up with is higher voltage during light load compared to when you undervolt using an offset, and this can become especially noticeable during gaming. To counteract this, we can combine the two and add a negative offset to a lowered AC load line. This gives us a lower base VID + offset (config C below); or a slightly lower base VID + surprise Vdroop for the CPU + offset (config B below).
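Extending the earlier sketch with an offset term makes it easier to see why a lowered AC alone barely helps at light load, and what the combination does. Again, this is a toy model with an invented base voltage and invented currents, not the CPU's real VID math; the mOhm and offset values simply mirror configs A, B and D described below.

```python
# Toy model: lowered AC vs. negative offset vs. both, at light and heavy load.
# Base voltage, currents and the simplified formula are illustrative assumptions only.

def vcore(v_base, ac_mohm, llc_mohm, offset_v, current_a):
    vid = v_base + (ac_mohm / 1000.0) * current_a + offset_v   # offset shifts the whole V/F curve
    return vid - (llc_mohm / 1000.0) * current_a               # minus the Vdroop the LLC really allows

configs = {
    "A: AC=DC=1.1, offset -0.140": (1.1, 1.1, -0.140),
    "B: AC=0.8, LLC~1.1, offset -0.125": (0.8, 1.1, -0.125),
    "D: AC=0.2, LLC~1.1, no offset": (0.2, 1.1, 0.0),
}

for label, amps in (("light load", 30.0), ("heavy load", 180.0)):
    print(f"--- {label} ({amps:.0f} A, made up) ---")
    for name, (ac, llc, off) in configs.items():
        print(f"  {name}: ~{vcore(1.25, ac, llc, off, amps):.3f} V")
```

In this toy model the offset-only setup keeps roughly the same voltage regardless of load, the AC-only setup (D) only pulls ahead once the load current is high, and the combination (B) gets the light-load benefit of the offset plus the extra heavy-load droop.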

I've tested 3 different undervolt configurations, all with CEP enabled, and have compared them with the default Lite Load 5 preset, with CEP disabled. The results illustrate well the benefits of each undervolting method. Here is an Excel file with all the test results, baseline information and some notes.

Config A is with the "Intel Default" lite load profile, with AC=DC=110, LLC on Auto and adaptive+negative offset set to -0.140V. This is my OG setup which I still like due to its simplicity and generally good results. Its only problem is the 1.33-1.34V spikes that can happen during gaming (in specific games).
Config B is a slightly modified version of config A, exploiting CEP's buffer zone. Here, AC=80, DC=110 and LLC=Auto. Because AC is reduced from 110 to 80, I've also reduced the offset a bit, to -0.125V, and this gives me almost the same VCore under load, but max VCore is lower due to the lower AC, which no longer causes the CPU to calculate such high VID requests. No impact on performance compared to config A.
Config C is an experimental one where AC=DC=68=LLC6 (set based on what's described above) and again a -0.125V offset. Here we have less Vdroop, but AC is also set lower, so the same -0.125V offset puts me at more or less the same VCore under load as configs A and B. However, during light load this gives me even lower max VCore spikes. No impact on performance compared to configs A or B.
Config D is just Lite Load 5 with CEP disabled, so AC=20/DC=110 and LLC=Auto. This gives me higher max VCore spikes than configs B and C, but generally performs slightly better at the full 188W load. You will see in the file that in Cinebench R23, LL5 achieves around a 100-150 pts higher result on average compared to the other setups, but this is not a significant difference. The most potential it has is in an OCCT-like workload, where LL5 could draw noticeably less power, but this seems to be dependent on the specific type of load. I should also note that this is the lowest perfectly stable Lite Load mode for my CPU, as with LL3, CB R24 crashes soon after I start it, and I don't think LL4 would be stable in R15, as the Vcore with it drops to the low 1.170s.

Cinebench R23
This is an interesting one because all four configurations perform similarly to each other, but with clear differences based on the power limit.
- At 188W, config D (LL5) has higher average effective clocks compared to the rest, by about 50MHz for the P and E cores, therefore scores a bit higher.
- At 125W, the situation changes and configs A-C perform better, with higher average effective clocks. This sets a trend - the lower the load is, the better the offset configurations perform compared to the Lite Load one.
- The short run R23 scores were very close to each other, with configs A-C being around 30200 pts, and LL5 around 30300 pts.

OCCT Stability test
Here the Lite Load 5 setup is a clear winner at PL2, and it seems that in a heavy load of the type OCCT generates, AC<DC configurations excel due to the large unpredicted VDroop. Because of the low AC value, the CPU doesn't expect much Vdroop, but the OCCT load seems to cause a lot of it, so the bigger the difference between AC and DC/LLC is, the lower the VCore will be.
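A quick back-of-the-envelope number for the size of that effect (the load currents are made up; only the mOhm values match the configs above): the "surprise" undervolt scales roughly with the AC-to-LLC gap times the load current.

```python
# Sketch: rough size of the unpredicted-Vdroop undervolt in an OCCT-style heavy load.
# Load currents are invented for illustration; mOhm * A = mV.

def surprise_undervolt_mv(llc_mohm, ac_mohm, current_a):
    """Droop the VRM actually allows minus the droop the CPU budgeted for."""
    return (llc_mohm - ac_mohm) * current_a

print(surprise_undervolt_mv(1.1, 0.2, 180))   # LL5-style gap at ~180 A   -> ~162 mV
print(surprise_undervolt_mv(1.1, 0.8, 180))   # config B's gap at ~180 A  -> ~54 mV
print(surprise_undervolt_mv(1.1, 0.2, 40))    # same LL5 gap at light load -> ~36 mV
```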
One thing to note is that the E cores didn't go past 4.1GHz with LL5, while they got up to 4.2GHz using the other three configurations.
Also, I don't understand the mechanism behind it, but the LL5 configuration had a significantly lower power draw at PL2 - 13W less than the runner up, config B.

Config B, where AC<DC=LLC, is in second place at PL2, so it seems AC load line undervolting is definitely the way to go if your use cases generate CPU loads similar to the ones OCCT does.

At PL1, they all effectively perform the same.

Geekbench 6
I tested this because it's a very light load for the most part, but with sharp load spikes here and there, so I thought it'd be a good test of max spikes in Vcore, current and power draw.
Here we also see that the two configurations with DC/LLC=110 + an offset have much lower max power draw spikes compared to the LL5 preset and the DC=LLC=68 + offset config. LL5 has the highest average VCore, while the VCore spikes are within a 10mV range across the four configurations.
Scores were within margin of error, around 2990 pts for single core and 19680 pts for multi core.
The win goes to config B for having the lowest metrics across the board.

Assassin's Creed Odyssey
In this game, Lite Load 5 has by far the highest average Vcore. This resembles the higher average Vcore during Geekbench 6, and is maybe related to the lower average current and/or power draw in these two scenarios. This is also typical during general usage without heavy load. LL5 always maintains the highest average VCore, because there is no offset applied to the V/F curve, and the low AC load line doesn't lead to much of an undervolt during low-load scenarios, when no VDroop is happening.
The win goes to B or C because of the lowest average VCore.

The Last of Us Part 1
In The Last of Us, this time config A, the 110/110 + offset configuration, had the highest average Vcore. Config D/Lite Load 5 still has the second highest average Vcore, and perhaps this game's CPU load is a middle ground where the Vdroop is high enough for config D to have a lower average Vcore than config A, but not high enough for the lack of a V/F offset to be compensated enough to match configs B and C.
The win goes to B or C because of the lowest average VCore.


Conclusions:
Can we undervolt with CEP enabled - definitely! It is certainly more complicated and finicky compared to simply reducing AC and disabling CEP, as there are now multiple parameters to account for - AC, DC, LLC, and offset. But the results can be very good, performance is almost identical compared to Lite Load 5, and the voltage is lower in gaming and light usage.
In Cinebench R23, LL5/config D technically performs the best, no doubt about it, but the performance difference is so negligible it can never be felt. However, LL5 had a significant advantage in the OCCT stability test. Lower VCore, lower power draw, lower temperature - it was a clear winner there. This brings me to a conclusion I never thought might be the case - perhaps there is no single best undervolt method (even complexity aside). Some will give you lower voltage in gaming and light usage, others will excel in specific workloads that tax the CPU a certain way. At least this is how I interpret my results, which, I admit, are not based on an extensive suite of benchmarks and tests. I could go back and do additional tests with the same configurations - probably first on my list would be a 10-minute R23 run and a 10-minute R24 run with each - but this would take me a lot of time.
Anyway, another thing I think is visible is that basically all four configurations are very capable, and I'm quite happy with the results overall. Configurations B and C are the most interesting to me because they combine a reduced AC load line with an offset, and mix the best of both worlds. I think they're great for most people, as they provide good performance and temperatures, and lower the overall max VCore. But the very big difference between AC and DC/LLC that's present with LL5 seems to be the best choice for optimizing power draw and temperatures for anybody whose use case involves heavy CPU loads such as OCCT, which create heavy Vdroop scenarios. For some reason the same doesn't apply to R23, so if somebody has an idea what's causing this different behaviour, please share.

Hope you enjoyed the read!
 
Well, the limits are on his screenshot 😉, and the cooling solution is not that crucial to know here, because it's simply about there being a high enough delta between his maximum DTS core temperatures and the TjMax of 100°C, yet thermal throttling still being triggered (possibly only for a split-second at a time). We have hypothesized before that "Intel Turbo Boost Max Technology 3.0" might be the culprit here - quick boosting of one or two cores (it tends to always be the same ones) leading to split-second thermal throttling of them, while overall CPU temps stay manageable. In other words, i would try disabling that option and checking the outcome.




Yes, any time your CPU is thermally throttled at ~150W, with any AIO (unless it's one of those useless 120mm ones), there will be a fundamental problem. BTW, you can watch this video for some common mistakes/downsides of pre-built PCs. It is shocking that an incorrectly mounted AIO was not detected during testing before it was shipped out to the customer. This would be immediately apparent to anyone running a couple minutes worth of a stress-testing tool (which is part of building and configuring any system). Speaking of which, you could take a look at my guide for some further optimizations, or at least, perform a Cinebench run with HWinfo in the background (but set units to °C please), then reply in my thread and we could go over the results, so as not to go too much off-topic here.
Tried disabling "Intel Turbo Boost Max Technology 3.0", but got the same results and the same score in Cinebench R23 :( so maybe that thermal throttling was not the full reason for the low values.
 
Did you try a manual all-core underclock? If not, that might be interesting to see how it affects your score. Just a very mild underclock might do, maybe -1 across the board, but lock all cores to the same. If I were a betting man, I'd guess 29,750 and far less thermal throttling. Of course, I know very very little about your system, so I'm sticking my neck out here in readiness to get it chopped off! :rolleyes: But you never know... If I'm right, I might score me-self a couple of more "Likes" :-)

P.S. Please just lie if you have to...
Did underclock and lock to 52 P and 41 E ... cpu package and core max hit 92 and 90c respectively for first 2-3 passes, then settled down to constant 78-79c, no throttling. Scores just about 28,000 so no real change score wise. oh and cooler is a deepcool ak620.
 
Did underclock and lock to 52 P and 41 E ... cpu package and core max hit 92 and 90c respectively for first 2-3 passes, then settled down to constant 78-79c, no throttling. Scores just about 28,000 so no real change score wise. oh and cooler is a deepcool ak620.
Wow! These Raptor Lakes really kick your butt (Can I say "butt" on here?). They make you really work for it, don't they? I guess maybe with the non-manual OC, you are just bouncing off the limit of your cooling capabilities. That's my guess. You might get some benefit from undervolting (more), but I doubt there's much more to be squeezed out of your 29K settings. You just might have to take a page out of Arctucas's book and go with a direct die, custom water loop, hooked up to a window A/C unit. No...don't laugh...that's exactly what he's got.
 
While I hate to admit it, I think CEP, and its clock-stretching functionality, is the future for Intel. It sounds like AMD and Nvidia have already built their entire voltage management approach around it. This might well turn out to be the very last Intel generation where you can configure with it or without it depending upon your preference.

Thanks, I hadn't known this. If true, then this is the basis of the undervolting guide of the future. (though it is only recently that MSI and Gigabyte released BIOS updates that let us disable CEP with non-K 14th gen Intel chips.) Just one thing I would emphasize when writing guides for a general audience: Already even the simplest undervolting actually speaks to a small percentage of users, because it presumes a certain uncommon level of experience and comfort with the underlying system setup (i.e. under/before the OS). (Merely updating the BIOS is an issue for many users -- hence in some pre-built systems BIOS updates get automatically installed by the OEM's own update program.) Moreover, BIOS updates do things that are surprising to casual users, such as reset without warning the values one had previously set in the BIOS--and sometimes do things that are surprising even to experienced users (e.g. disable without warning an Intel microcode fix....).

Of course with BIOS changes (especially the more involved undervolting kinds), it is possible for users to brick their systems.
One major concern I might have with manually setting AC and DC using the advanced mode is that, apparently (and I only realized that yesterday), the BIOS would let you set an AC value of up to 1000. Now, I don't know if it would allow you to save the settings, but I accidentally typed in 680 instead of 68 and it didn't light up in red immediately. :X Not sure what exactly would happen if somebody sets AC to 680, but I am pretty sure it won't be healthy for the CPU.
Yes, it has been done! (fortunately the board in question had a 1.732V limit, though that "won't be healthy for the CPU" either):
With manually changing AC Loadline, it is regrettable that mobo makers do not place the decimal point in the same place (or perhaps it is regrettable that the mighty decimal separator is a tiny and easy-to-miss symbol - in medieval times it was a bar and more obvious - which is presumably the rationale that led MSI to go with non-decimal-point LL settings). So for example, here is someone following a guide carefully, or rather literally, and entering 60 when they should have entered .60, with bad results, esp. since the BIOS, not designed with the layman in mind, does not have many safety rails against ID-10-T errors:

https://rog-forum.asus.com/t5/intel...mohms-means-more-voltage-z690-tuf/td-p/915672
I set AC/DC load line values to 60 mOhms. But the article which is related to a MSI motherboard means 1/100 of the Asus value. So it means 0.6 mOhms for Asus and 60 for MSI. ... The board has set the shown voltage of 1.732V. The idle temps within bios has been more than doubled. Clear indicator this crazy voltage has been applied.

But to keep this in perspective:
https://www.bugsnag.com/blog/bug-day-race-condition-therac-25/
(In contrast, bricking the pc might be in some cases a good thing, perhaps even overall and globally.)

This is all well known, just emphasizing it. Unfortunately undervolting became a necessity with Raptor Lake (more like non-over-volting). As the solution for a general audience, I don't see why Intel and mobo makers couldn't have a saner semi-certified profile system, the way that XMP/EXPO profiles have saved many of us from tedious fiddling with memory OC settings. At any rate, the BIOS should be able to block insane or "never right" settings like 60 mOhms for AC Loadline, or at least not without popping some skull and crossbones confirmation in red caps.
 
So am I right in thinking that MSI’s adaptive offset undervolt is applied to Vid requests, same as the Gigabyte setting BZ uses in the recent video (posted a few messages back)?

If so, it seems BZ would favour something akin to config B, Vass?
 
Tried disabling "Intel Turbo Boost Max Technology 3.0", but got the same results and the same score in Cinebench R23 :( so maybe that thermal throttling was not the full reason for the low values.
Is Turbo Boost Max 3.0 supposed to do anything at all in an all-core benchmark like R23? I would have expected ITBM 3.0 to kick in only once you've got a low thread count workload, so it can push the two favored P-cores.
 
Wow! These Raptor Lakes really kick your butt (Can I say "butt" on here?). They make you really work for it, don't they? I guess maybe with the non-manual OC, you are just bouncing off the limit of your cooling capabilities. That's my guess. You might get some benefit from undervolting (more), but I doubt there's much more to be squeezed out of your 29K settings. You just might have to take a page out of Arctucas's book and go with a direct die, custom water loop, hooked up to a window A/C unit. No...don't laugh...that's exactly what he's got.
I suspect you're spot on about the cooling. DeepCool's claim that this cooler can handle a 260W TDP I take with a very, very large grain of salt, aka a salt lick. I don't game, just photo and some video editing, so I can't complain too much. It still beats the snot out of my old Z97 and i5-4690K, which is now my backup just in case.
 
So am I right in thinking that MSI’s adaptive offset undervolt is applied to Vid requests
Short answer is yes.

I'm actually trying to unpack a little more info about what goes into the VID. I recently realized just how ignorant I am about this very important starting point. And, of course, I'm already running into contradictory information. But, from my understanding (and in a grossly simplified world) VID adjustments include AC_LL, temperature, and manual offsets. There's also an adjustment for IccMax X "A Mysterious Modifier" (likely to remain an Intel secret), which could be at the very heart of our collective misery when it comes to these voltage spikes. I hope that answers your question. If not, I'll try to get CiTay on the line. :typing:
 
Is Turbo Boost Max 3.0 supposed to do anything at all in an all-core benchmark like R23? I would have expected ITBM 3.0 to kick in only once you've got a low thread count workload, so it can push the two favored P-cores.
I have thought the exact same thing. I haven't thought about it too hard, but it does seem like somehow TB3 is doing something when in R23.
In BuildZoid's immortal words "If it wasn't weird, it wouldn't be Intel."
 
Short answer is yes.

I'm actually trying to unpack a little more info about what goes into the VID. I recently realized just how ignorant I am about this very important starting point. And, of course, I'm already running into contradictory information. But, from my understanding (and in a grossly simplified world) VID adjustments include AC_LL, temperature, and manual offsets. There's also an adjustment for IccMax X "A Mysterious Modifier" (likely to remain an Intel secret), which could be at the very heart of our collective misery when it comes to these voltage spikes. I hope that answers your question. If not, I'll try to get CiTay on the line. :typing:
It does, thank you, boss ❤️
 
Nice timing.


Right off the bat, not quite agreeing with his introduction that undervolting with CEP off is the "wrong way" and with it on is the "right way". This works on the premise that keeping IA CEP enabled is somehow vital for the CPU's health. But at 2:55 mins we already get the concession that could make the entire video somewhat obsolete (or rather, we'd first need a different video showing what makes IA CEP so important):

"Now this might not actually be necessary"
"I don't know for sure exactly what causes 13th/14th gen CPUs to degrade"
"It's just the fact that the CPU sometimes sends VID requests for like 1.6V and then it gets 1.6V"
"The Intel documentation doesn't actually explain what CEP is supposed to prevent"

Yes, it could be slightly unfair to cite verbatim from a stream-of-consciousness video. But nobody has shown me to a reasonable degree yet what makes IA CEP so essential in preventing degradation. Thanks to some people here on the forum (not least the OP but also others), we knew for a long time that undervolting without triggering IA CEP was perfectly doable, it's just more effort. To the point that you almost need a long video to take the user by the hand and explain the whole process.

So the thing is, we need something that Average Joe can somewhat follow. This is exactly what this reply was about:

Already even the simplest undervolting actually speaks to a small percentage of users, because it presumes a certain uncommon level of experience and comfort with the underlying system setup (i.e. under/before the OS). (Merely updating the BIOS is an issue for many users -- hence in some pre-built systems BIOS updates get automatically installed by the OEM's own update program.) Moreover, BIOS updates do things that are surprising to casual users, such as reset without warning the values one had previously set in the BIOS--and sometimes do things that are surprising even to experienced users (e.g. disable without warning an Intel microcode fix....).

(BTW lopez, in your post, it's better to include the usernames of the people you quote from)

So, with a guide that really includes all the tricks, the number of users it reaches gets smaller again, but we have a huge userbase to deal with here. For some of them, yes, they are not sure how to update the BIOS, or how to switch it to advanced mode (the latter i already mention in passing in my guide, probably have to add how to update the BIOS too, to at least have the newest microcode).

The main problem is that each and every CPU is truly individual. So you can never recommend a bunch of fixed settings, it will always require a guide. Finding the right compromise - writing a guide from which both novices and advanced users can extract the level of information they desire - is an art that is almost impossible to master, unless you write different versions of the guide, or film different versions of the video. Again, depending on the complexity of the guide, it can become increasingly difficult to break it down for the absolute novice. This goes for all of my guides: either i explain a concept a bit more in-depth to give more information than you could find with a cursory Google search, or i keep it very generalized and then it barely has any benefits over some run-of-the-mill article you may find about the topic.


at least not without popping some skull and crossbones confirmation in red caps.

These kinds of warnings never work, and they have bipolar disorder. Just a few short months ago, most motherboards' BIOSes would sort of gently push the user towards maxed-out limits by default, with all the protections disabled. About the individual values, consider this: With XMP enabled, the BIOS can easily raise some IMC-related voltages sky-high, while simultaneously having a low threshold for the red=dangerous color when you raise certain voltages yourself (mentioned it before, like here, funnily enough also mentioning buildzoid and what crazy high voltages he likes to use in his videos 😉).

This is the same on Gigabyte etc.; the thresholds at which the settings' font becomes red/purple/whatever are all over the place. If you want to rely on the BIOS for some guidance there, you're lost. And the board makers don't have the CPU safety in mind first and foremost. They want to have a low number of support cases, so they prefer settings that can keep a wide range of possible configurations stable, at the cost of things like efficiency.
 
I can hear it now. All around the world, Dads are asked by their sons and daughters, "Dad, where were you when the great CEP wars broke out in the 2020's?"
No, seriously. It's great to see this CEP debate staying alive because I don't know about you, but I'm thoroughly enjoying it. We have some great minds at work on this forum and it's good to see such healthy and respectful debate. Bravo! It's the only way we are ever going to try to understand these silly Raptor Lakes.

And, while I'm on the topic of such things... In the spirit of friendly debate and competition, I've been cooking up something I think might be both fun and educational. I hope that, if and when I launch my new initiative (maybe two weeks out), I can count on you regulars to contribute. Who knows, we might just have a little fun with this one. And who wouldn't want that, given how much hair loss these damn chips are causing us!

In the meantime, ladies and gentlemen, may I suggest you get busy trying to get those R23 scores up. You never know, they might turn out to be important...
 


You mark the text you want to reply to with the mouse, then there is a little "Reply" button popping up which adds everything to your reply box. 😉
 
I have thought the exact same thing. I haven't thought about it too hard, but it does seem like somehow TB3 is doing something when in R23.
In BuildZoid's immortal words "If it wasn't weird, it wouldn't be Intel."
Intel have so many piece-of-sh*t mechanisms piled on top of each other in their CPUs, and they fail to adequately document them. There's so much guesswork required to get anywhere.
These are truly enthusiast CPUs, because nobody else can get them to work properly.
 
Hello, newbie here. OP thanks for your hard work gathering info.
I have a 13700K on a PRO Z690-A, BIOS 7D25v1I1 (beta version).
I have noticed this in my case:
With CEP enabled I am unable to undervolt the CPU with the Vcore curve - does someone know the reason? It's just that the CPU is always getting 1.20V under full load in Cinebench R23. I even tried a -0.2mV undervolt, but Vcore stays at the same voltage, like the voltage is added back when CEP is enabled. (I have tried with undervolt protection disabled, but same result)
Without CEP enabled, I am able to undervolt as much as I want; obviously the system becomes unstable at some point if I go too far.
 
You mark the text you want to reply to with the mouse, then there is a little "Reply" button popping up which adds everything to your reply box.
Ah great, thanks! I can do offset undervolts, but I choke on obvious-as-the-sun idiot-proof "CLICK ME!" things.

And the board makers don't have the CPU safety in mind first and foremost. They want to have a low number of support cases, so they prefer settings that can keep a wide range of possible configurations stable, at the cost of things like efficiency.
True, but there could be (--and haven't we seen a bit of movement in this direction lately?) Intel-mandatory default profiles that are tighter and better, though it would require more cooperation with their partners. Or, at least have two reasonable pre-set configs, one more efficient but possibly less stable, versus max stability. (XMP can come in multiple profiles, though not helpfully labelled.) But it's a good point, that an efficient default would mean more support cases (since efficient profiles tend to be less stable with low-binned cpu's). otoh, the manufacturer could prohibit RMA returns, beyond the MBG period, unless the system was also unstable with its own "max stability" profile. The profiles could be off for other purposes, such as benchmarking, or there could be extreme/baller/devil-may-care/"kiss your warranty goodbye" profiles, to satisfy the competitive race.
(--but maybe this is too impractical or costly to implement. Of course massive Raptor RMAs are also costly.)
 
This is the same on Gigabyte etc.; the thresholds at which the settings' font becomes red/purple/whatever are all over the place. If you want to rely on the BIOS for some guidance there, you're lost. And the board makers don't have the CPU safety in mind first and foremost. They want to have a low number of support cases, so they prefer settings that can keep a wide range of possible configurations stable, at the cost of things like efficiency.
Which brings us down to the core problem: this issue will persist unless CPU manufacturers return to delivering the advertised performance at fool-proof and safe settings, which mainboard manufacturers are required to set by default. If you want to squeeze out more, be my guest, but at your own risk.
This nonsensical race of chip makers for synthetic benchmark numbers in unrealistic setups, with high-bin "golden samples", liquid nitrogen like cooling and internal R&D tweaking knowledge, needs to stop.
 
Intel have so many piece-of-sh*t mechanisms piled on top of each other in their CPUs, and they fail to adequately document them. There's so much guesswork required to get anywhere.
These are truly enthusiast CPUs, because nobody else can get them to work properly.
It's why I'm always so tempted to just implement an all-core OC, even if it's a slight downclock. What you get for that is (a) consistent performance when you run R23, etc., (b) tighter VIDs, even to the point where you can just pick one core to watch, (c) predictable Vcores, (d) easy to know if degradation occurred, and (e) like you say, you never have to worry about all these mysterious algorithms like TB3, TVB, ABT, eTVB, etc. It's a "dog's breakfast!" Complete madness. It's time Intel [worked with their board partners] to consolidate all of these boost algorithms into one easy to read BIOS screen, where you get to control the boost temps, frequencies and cores from the point of their defaults. Now, wouldn't that be nice?
 
Completely agree: still, LiteLoad modes are there, but even then you’d need to possess the curiosity to go looking on MB specific boards to find Citay’s guide, or one like it.

It's an absolute shitshow and I will not be buying Intel again because of it. With the 7800X3D, for instance, you chuck it in, slap an air cooler on it, and it just works. WTF are Intel playing at?

All of that said, I am very grateful for Vass and Citay's guides. I'm looking forward to imposing Vass' config B tomorrow and testing. I will say that a 0.125V offset is probably too aggressive. Going to start with 0.075 or 0.05 and increase from there. I just want reduced temps and VCORE with stability. I'm not looking to min-max everything to the nth degree.

I'll be very curious to hear how it goes with your 14700K!
Regarding the 0.125V being aggressive, it's actually even on the safer side, for my CPU at least, as it is also fully stable with -0.140V in config A. I'm comparing them directly because the way I've paired config B's offset with the AC load line results in mostly the same VCore under load as with config A, but A has even lower min VCore drops in idle, down to around 0.668V. So config B is generally on the safer side compared to A, because of the slightly higher min VCore. And at the same time I know that A was stable even with -0.150V as far as min Vcore goes, because the only instability I faced with -0.150V was under R15 load, and specifically during the run itself, not between each loop when transients happen.
There is a lot of variance across CPUs though; I've even seen people running -0.160V offsets stably for a year and never experiencing any instability issues, so until we try, we can never know how far stability stretches.
So am I right in thinking that MSI’s adaptive offset undervolt is applied to Vid requests, same as the Gigabyte setting BZ uses in the recent video (posted a few messages back)?

If so, it seems BZ would favour something akin to config B, Vass?
I haven't watched BZ's video yet, but I'll check it out soon. I see FlyingScot has already replied to your offset question. :beerchug:

Did underclock and lock to 52 P and 41 E ... cpu package and core max hit 92 and 90c respectively for first 2-3 passes, then settled down to constant 78-79c, no throttling. Scores just about 28,000 so no real change score wise. oh and cooler is a deepcool ak620.
Hm, I actually have the exact same cooler as you. CPU package hits 92C at 200W PL2, right? This sounds about right, assuming your ambient temp is around 22-24C.
But yeah, there is no way the AK620 can handle 260W as DeepCool say; I don't know how they have come up with this number.
Regarding the Cinebench score, as you're talking about "first 2-3 passes", I assume you're running the 10-minute test, right? In this case your score is completely normal, because it's limited by your PL1. The scores I've shared are for short runs, I've noted this in the excel file. For a 10-minute run with PL1=125W, my score is between 27500 and 28000. I guess with 150W PL1 I'll be somewhere around the 29000 pts ballpark too.

The reason I've not tested 10-minute runs is firstly because it takes a lot more time to test all setups, but also because, as long as there is no thermal throttling, the short-run score at 188W will basically be within margin of error of the long-run score, as long as PL1 was also set to 188W. I have actually tested this with configs B and C (as part of some stability tests), and the 10-minute scores with PL1=PL2=188W were again in the low 30000s. I prefer my PL1 set to 125W because of the lower temperatures, but sharing the long-run R23 score, affected by PL1, will not bring a lot of additional value to the post.

As a general note, one thing we should keep in mind when comparing R23 scores, is that any background activity affects them, so comparing different PCs with different Windows installations is a bit tricky. Windows Update/Defender/Search Indexer (pick your poison) may be doing something in the background, or you have some apps open, or perhaps you've increased HWInfo's polling frequency. All those can negatively impact the score, and especially the Windows background processes sometimes run randomly and can cause variances across different runs. I think this may be part of the reason why you're seeing a lower result, and why one run gives you 28000, then shortly after you get 29000, but of course there might be something else going on too.

It's why I'm always so tempted to just implement an all-core OC, even if it's a slight downclock. What you get for that is (a) consistent performance when you run R23, etc., (b) tighter VIDs, even to the point where you can just pick one core to watch, (c) predictable Vcores, (d) easy to know if degradation occurred, and (e) like you say, you never have to worry about all these mysterious algorithms like TB3, TVB, ABT, eTVB, etc. It's a "dog's breakfast!" Complete madness. It's time Intel [worked with their board partners] to consolidate all of these boost algorithms into one easy to read BIOS screen, where you get to control the boost temps, frequencies and cores from the point of their defaults. Now, wouldn't that be nice?

I also had Turbo Boost 3.0 turned off for a while, as I also thought it might cause some unnecessary voltage spikes, and I truly don't care whether my CPU boosts by 100MHz more on two cores 0.5% of the time I'm using my PC. At some point I noticed in the per-core VIDs that P-core 0 almost always has the highest VID. So as TB3 works with P-cores 4 and 5, I thought I should try turning it back on to check if it would affect my VCore spikes - not at all. Max VCore is basically exactly the same, as it's mostly P0 that's the determining factor, and curiously, sometimes it's one of the E-cores. I also get 30-40 more points for single core in R23, so that should mean something... right?
 