Arrow Lake Voltage Management is a step in the right direction

FlyingScot

Anyone interested in Arrow Lake should definitely watch der8auer’s review. In it, he goes into quite a bit of detail about the most intriguing new feature introduced with Arrow Lake: the DLVR (digital linear voltage regulator) mechanism, which supports a different voltage (i.e. Vcore) for each individual IA core. In other words, the days of the highest VID determining the voltage for the entire CPU (a big issue with Raptor Lake) could be well and truly over.
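
If it helps to see why per-core voltages matter, here’s a rough back-of-the-envelope sketch. Every VID and current figure below is made up purely for illustration; the point is the arithmetic, not the numbers.

```python
# Hypothetical example: with one shared rail, every core runs at the voltage
# demanded by the worst core (highest VID). With per-core DLVR, each core
# only gets the voltage it actually asks for.

cores = [
    # (VID in volts, current draw in amps) -- made-up figures
    (1.30, 12.0),   # the "weak" core that needs the most voltage
    (1.20, 10.0),
    (1.18, 10.0),
    (1.15,  9.0),
]

shared_v = max(vid for vid, _ in cores)               # old scheme: highest VID wins
p_shared = sum(shared_v * amps for _, amps in cores)  # every core at that voltage
p_per_core = sum(vid * amps for vid, amps in cores)   # per-core voltages

print(f"Shared rail:   {p_shared:.1f} W")
print(f"Per-core DLVR: {p_per_core:.1f} W")
print(f"Saved:         {p_shared - p_per_core:.1f} W (before the DLVR's own losses)")
```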

The upside is better efficiency for gaming-like workloads, not to mention the voltage tuning possibilities. [It looks like CiTay might have to publish a new guide or two!] However, there’s always a catch with new technology. The downside is that the DLVR silicon consumes energy of its own, and that loss grows with the amount of current flowing through it. In other words, the number of active cores is an important consideration.
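
To put that catch in numbers (again, purely illustrative figures, not measurements): the regulator burns whatever voltage it drops times the current flowing through it, so the loss scales with how many cores are busy.

```python
# Sketch of the DLVR's own loss: (voltage dropped across the regulator) x (current).
# Both constants below are assumptions, chosen only to show the scaling.

V_DELTA = 0.30        # volts dropped from the input rail to the core (assumed)
AMPS_PER_CORE = 9.0   # rough current per fully loaded core (assumed)

for active_cores in (2, 8, 24):
    loss = V_DELTA * AMPS_PER_CORE * active_cores
    print(f"{active_cores:>2} active cores -> ~{loss:.0f} W burned in the DLVR")
```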

Cinebench R23, with its all-core workload, is a perfect example of an application that will lose efficiency relative to a non-DLVR setup. For this reason, it is expected that users will be able to configure the BIOS to bypass the DLVR mechanism if they so choose. Although, with most of us being gamers, I doubt many will want to disable it.

 
So, a question about that video, something I see everywhere: are these comparisons done with stock CPUs, with unlimited power limits (against Intel's recommendation)?

In this video I see the following R23 benchmark result. Yellow is score, blue is power consumed.
My 14900K runs at 230 watts and scores 39,000, so how come this KS consumes 80 watts more for only 500 extra points?
How am I supposed to interpret this benchmark in comparison to my own rig?
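
Just doing the points-per-watt sum makes the gap obvious. The 14900K numbers are from my own rig; the KS numbers are what I can read off the chart in the video, so treat them as rough.

```python
# Cinebench R23 efficiency: points per watt. KS figures are approximate,
# read off the chart in the video.

results = {
    "14900K (my rig)": (39_000, 230),
    "14900KS (video)": (39_500, 310),
}

for name, (score, watts) in results.items():
    print(f"{name}: {score / watts:.0f} pts/W")
```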

It could also mean that the 285K could reach a 42,000 score at maybe 180 watts?
Then again, this CPU seems to only excel in benchmarks, so who cares.
[Screenshot: R23 score and power consumption chart from the video]


Furthermore, how do CPUs suck up 250 watts during Cyberpunk in his tests? I've run plenty of games, but my CPU never uses more than 120 watts, even with my GPU at full load and FPS uncapped to see when it bottlenecks.
 
That’s always been a challenge with online reviews. They never give you enough information about their settings to be able to reproduce their results or make educated comparisons. All I can say is that anyone who knows this industry knows that der8auer is a reputable source for products and information.

You’re right, 250W in Cyberpunk is mighty odd. Although, I swear I’ve seen these kinds of gaming power figures before. Perhaps some major shader compilation thing??
 
I would like to see power draw reported the way FPS is: average, max and 99th percentile. I cannot imagine the 14900K running at 250 watts during gameplay, and if it's only during shader compilation, then those are just insignificant peaks.
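
It really wouldn't take much on the reviewers' side either. Here's a rough sketch of the stats I mean, assuming nothing more than a list of per-second package power samples exported from whatever logging tool they already use (the sample data below is made up).

```python
# Report power the way FPS is reported: average, 99th percentile, max.
from statistics import mean, quantiles

power_samples = [112, 118, 105, 121, 250, 117, 109, 122, 115, 108]  # dummy watts

avg = mean(power_samples)
p99 = quantiles(power_samples, n=100, method="inclusive")[98]  # 99th percentile
peak = max(power_samples)

print(f"avg {avg:.0f} W | 99th percentile {p99:.0f} W | max {peak:.0f} W")
```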
 
And just like that. Poof! No more bypass mode for you video rendering guys out there.
So now we wait for Intel’s incoming fixes to the microcode in an attempt to improve gaming performance. In related news, Intel is planning 3D cache for 2025. Just not for gamers. You’ve got to love that one!
 
Yeah, I saw that one. It appears to have a pretty decent impact on performance too, to quote Tom's Hardware:

DLVR is helpful in gaming loads where a 0.3V delta (between Vin and Vcore) nets a 20W power loss (20W = 0.3V x 67A). However, in production workloads where all cores are in full throttle, a 0.4V delta quickly results in over 88W of power lost (88W = 0.4V x 220A)
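
The arithmetic in that quote is easy enough to sanity-check yourself:

```python
# Loss in the DLVR = (voltage delta between Vin and Vcore) x (current)
print(f"Gaming load:     {0.3 * 67:.0f} W")   # ~20 W, as in the article
print(f"Production load: {0.4 * 220:.0f} W")  # 88 W, as in the article
```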

Intel is ballsing it up
 
Intel is ballsing it up
Yes, it’s hard to imagine how you could alienate your customers more than Intel is trying to do. No bypass mode to help improve efficiency for work-related tasks; no 3D cache to improve efficiency for game-related tasks. Maybe Intel will get AMD’s Best Salesman of the Year award next year.
 