CUDA jamming up Windows 10 - Can I use NVIDIA and Intel GPUs at same time?

will@combus.net

New member
PRIVATE E-2
Joined
Mar 20, 2017
Messages
5
I recently bought a GT72 6QD DOMINATOR G with the specific purpose of using this notebook for scientific modelling, for simulation of earthquakes and other natural disasters.
I have been writing CUDA/C++ programs which run fine on my other notebooks (a Toshiba and an ASUS), each of which has an NVIDIA card.
The new MSI is theoretically a more powerful machine in all aspects, but unfortunately the notebook doesn't seem to be able to run CUDA code.

When I run identical code on the new MSI and parallel processing starts, I find that the screen jams up and the mouse pointer freezes in place.
My understanding is that the NVIDIA GPU is being used 100% by the scientific model in CUDA and hence has no resources available to Windows 10.
(This is my best guess but I'm not 100% sure that this is the case)

The other notebooks each have an NVIDIA GPU as well as an Intel GPU, and so Windows seems to use the Intel GPU when the NVIDIA GPU is doing the complex processing.
When I look at Device Manager in Control Panel, each of these notebooks shows two GPUs, while Device Manager on the MSI only shows one.
I can press the physical button on the MSI and switch (after reboot) to the Intel GPU, which then shows in Device Manager while the NVIDIA card disappears.
Unfortunately the CUDA code will not run without the NVIDIA card activated.

My question:
Is there any way to have both GPU cards active at the same time on an MSI notebook?

Backup question:
Is there any way to limit the usage of the NVIDIA GPU for CUDA work so that a small percentage of its processing power is reserved for Windows?


Thanks for the help that anyone can provide.
 
will date=1490050818 said:
... Is there any way to have both GPU cards active at the same time on an MSI notebook?
Is there any way to limit the usage of the NVIDIA GPU for CUDA work so that a small percentage of its processing power is reserved for Windows? ...

Hi Will,

I cannot provide you with an answer as to whether or not you can get your CUDA/C++ code to run on your GT72 6QD. But there is more than one issue at work here which is making it difficult for you to grasp how your notebook is designed to work. Once you understand how your notebook works, you should probably contact NVidia for further assistance.

The most important thing to understand first is that MSI designs its gaming notebooks for Windows PC gaming. Any user who considers using an MSI gaming notebook for a non-gaming purpose needs to do a careful evaluation before making a purchase, because an MSI notebook might not serve the desired purpose.

Like you, I purchased a GT-series notebook (GT80 2QE) for a non-gaming purpose. In my case, I'm using it as a desktop replacement for media creation and software development. One of the things I needed was very high data protection and, for me, that begins with mirrored RAID-1 arrays. Yet, even though my GT80 2QE has four M.2 SSD slots that can be combined in a variety of striped RAID-0 configurations, MSI chose to disable all mirrored RAID-1 functions at the hardware level (they removed the options from Intel's IRST firmware). I guess they just figured that gamers don't need mirrored RAID arrays. This would have been a deal-breaker for me, but I found a workaround and was able to make the purchase. That's just one example of many that I had to deal with before making my purchase.

The reason I'm explaining this is so you'll know not to expect features that other notebook manufacturers provide. If a feature is deemed by MSI to be irrelevant to gaming, they may omit it even though the hardware is fully able to do it.

Now let's get to the video systems. At present, MSI is offering three different video configurations in their gaming notebooks. All of their notebooks have an integrated GPU (iGPU) in the Intel i7 CPU as well as one or more discrete GPUs (dGPUs). But the way the video is configured and the features made available by MSI cause the iGPU and dGPU(s) to function in very different ways---in one configuration the iGPU is not operational at all.

1 - The most common configuration has one iGPU and one dGPU. They run simultaneously and NVidia Optimus software automatically switches between them for the video processing. In this configuration, the Intel iGPU serves as the full time video controller for the display---this is true even when the NVidia dGPU is handling the video processing. But Optimus chooses which GPU will do the actual video processing. It makes this decision based on the video workload, available electrical power, and the user preference (set in the 3D settings of the NVidia Control Panel). In this configuration, both the iGPU and the dGPU will be visible in the Windows Device Manager at the same time. This is what you are used to seeing with notebooks from other manufacturers, too. And this is probably the system that you've successfully run your CUDA/C++ code on.

2 - The least common configuration has one iGPU and supports up to two dGPUs in SLI. This configuration is only offered in MSI's top-of-the-line GT-series notebooks. Because SLI is supported, NVidia Optimus cannot be used (it is not compatible with SLI). Even though your GT72 6QD has only one NVidia dGPU, its motherboard is designed to handle two in SLI and some GT72 models were sold with two in SLI. Because of this, MSI had to provide a different way to switch between the iGPU and dGPU(s). It does it at the hardware level with the "GPU" button and, as you've noticed, it requires a reboot. With these notebooks, only one GPU system is visible at a time. When the Intel iGPU is selected, it will be the only GPU visible in the Windows Device Manager. When the NVidia dGPU(s) are selected, they will be the only GPU(s) visible in the Device Manager. This is the way your GT72 6QD operates. So the answer to your question is: "No, you cannot run the Intel iGPU and the NVidia dGPU at the same time in your notebook."

3 - A third configuration is appearing in some of the newest MSI gaming notebooks. It's been appearing mostly in the VR (virtual reality) models. Some of them have LCD panels that can operate at a higher refresh rate and use NVidia's G-Sync feature. Unfortunately, G-Sync is not compatible with Optimus or the Intel iGPU at all. So notebooks that have G-Sync are forced to use the NVidia dGPU full-time and the Intel iGPU is not available. The Windows Device Manager will only see the dGPU.

Back to your problem, it seems to me that you should be able to get your CUDA/C++ code to run on an NVidia dGPU when the Intel iGPU is not available. My guess is that it's just a matter of knowing how to configure the NVidia driver properly for it. Or it may require a programming technique. Just remember that there are lots of desktop computers that do not have an iGPU---they only have a dGPU. So there must be a way for an NVidia dGPU to divide the workload properly. That's why I think your best source for further help will be NVidia. I'm not impressed with MSI Support and your problem is outside their scope. And, in case you're unaware, this is a volunteer user-to-user forum. MSI does not participate here.
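One thing you could do from code is ask the CUDA runtime how it sees your dGPU. I'm not a CUDA programmer myself, so treat this as a minimal sketch rather than gospel: the CUDA device properties include a "run-time limit on kernels" flag, and if it's set, the driver can cut off long-running kernels on a GPU that is also driving the display---which would fit the freezing you're seeing.

```cuda
// Minimal sketch: enumerate CUDA devices and check whether the driver
// imposes a run-time limit on kernels for each one. On a GPU that is
// also driving the display, kernelExecTimeoutEnabled is typically 1,
// meaning very long kernels can be cut off while the screen is frozen.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s, run-time limit on kernels: %s\n",
                    i, prop.name,
                    prop.kernelExecTimeoutEnabled ? "yes" : "no");
    }
    return 0;
}
```

If that flag shows "yes" on your GT72, it would at least confirm that the driver treats your dGPU as a display GPU, which is useful information to bring to NVidia.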

As for Win 10 having a part in the problem---who knows? In my opinion, Win 10 has been the worst version of Windows in a looong time and my experience goes way back to Win 2 in the late 1980s. I'm not using it on any important computer. In fact, the only place where I'm using it is on a testbed system for compatibility testing of my own code.

Kind regards, David
 
Hi David

Thanks for your incredibly detailed response to my post.

From everything you say, it seems I bought the wrong laptop for my purposes.
In fairness, the detail would have been beyond the salesperson so I don't begrudge them.
Unfortunately, though, it would also be beyond the consumer law regulator (I'm in Australia), so I doubt I would have an easy time pushing for a refund.

You give me some hope with your comment: "you should be able to get your CUDA/C++ code to run on an NVidia dGPU when the Intel iGPU is not available."
To be a bit more specific on my issue, I have found that shorter CUDA runs (less work per kernel call) will actually complete without a problem.
It seems to jam up in the intensive loop within my simulation code when I run larger analyses.
There seems to be a tipping point and Step 1 will be for me to try to work out where that is on my current setup, with the NVIDIA card enabled of course.
Maybe that tipping point, in terms of number of threads, total memory used or some other measure, will give me a clue to a solution.

Then I guess Step 2 will be to have a better look at the NVIDIA documentation and see if there is a way to lock in the proportion of the GPU that is dedicated to each process.
My code is clearly taking over and using 100%, so if I can limit that then maybe I can get it working.
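One idea I'm thinking of trying (just a sketch of the approach, not something I've tested on the MSI yet---the kernel name and sizes below are made up for illustration) is breaking each big launch into a series of shorter ones, synchronising between slices so the driver gets a chance to service the display in between:

```cuda
// Sketch: instead of one huge kernel launch that monopolises the GPU,
// process the workload in slices. Each launch is short, and the
// synchronisation between slices gives the driver a window to update
// the display. simulate_step is a stand-in for the real physics kernel.
#include <cuda_runtime.h>

__global__ void simulate_step(float* state, int offset, int n) {
    int i = offset + blockIdx.x * blockDim.x + threadIdx.x;
    if (i < offset + n) {
        state[i] *= 0.99f;  // placeholder for the real update rule
    }
}

void run_in_slices(float* d_state, int total, int slice) {
    const int threads = 256;
    for (int offset = 0; offset < total; offset += slice) {
        int n = (offset + slice < total) ? slice : (total - offset);
        int blocks = (n + threads - 1) / threads;
        simulate_step<<<blocks, threads>>>(d_state, offset, n);
        cudaDeviceSynchronize();  // let the driver breathe between slices
    }
}
```

The tipping point from Step 1 would then tell me roughly how big each slice can safely be.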
If you have suggestions on Step 2 please let me know.

Thanks again for the help.

Cheers

Will
PS   FYI, I am an actuary and geophysicist based in rural NSW simulating natural disasters and calculating financial loss for the reinsurance industry.
 
will date=1490070678 said:
... and Step 1 will be for me to try to work out where that is on my current setup, with the NVIDIA card enabled of course.
Maybe that tipping point, in terms of number of threads, total memory used or some other measure, will give me a clue to a solution. ...

Hi will,

Perhaps I'm showing my ignorance regarding CUDA programming, but I still do not believe that the above should be "Step 1". If I were in your shoes, I'd try to get more information before jumping to a conclusion. To my mind, Step 1 should be contacting NVidia and experimenting with your dGPU settings. Regarding the settings, have you tried manually locking your dGPU as your PhysX processor via NVidia Control Panel > Configure Surround, PhysX? If not, try it. If it is set to "auto-select" it might be trying to jump to the CPU and, since the iGPU is not available, it crashes. On my GT80 2QE, the NVidia Control Panel still allows the CPU to be selected even though my Intel iGPU is not visible to the system. The only choice it should offer is which of my two dGPUs I want to dedicate to PhysX processing (remember, my system has two NVidia dGPUs in SLI). That tells me NVidia's control panel might allow a wrong decision to be made, since it doesn't automatically exclude the CPU and "auto-select" choices as it should.
 
 
[quote author=will]Then I guess Step 2 will be to have a better look at the NVIDIA documentation and see if there is a way to lock in the proportion of the GPU that is dedicated to each process.
My code is clearly taking over and using 100%, so if I can limit that then maybe I can get it working.
If you have suggestions on Step 2 please let me know. ...[/quote]

I would also check the forums where CUDA programmers hang out. I'm sure this is a common problem. Like I wrote before, there are lots of high-performance desktops with an NVidia dGPU and no iGPU in their Intel CPU. So programmers have got to have a way of dealing with this.
 
 
[quote author=will]... FYI, I am an actuary and geophysicist based in rural NSW simulating natural disasters and calculating financial loss for the reinsurance industry.[/quote]

You've certainly got my respect!!! As I understand it, actuaries are some of the smartest cats on the planet!!! Makes me envious. I've been privileged to work alongside some incredibly brilliant practical mathematicians, engineers and physicists dealing with inter-dimensional transforms. A couple of decades ago, it blew open a whole new way of making acoustical measurements in my field of pro audio. We literally learned to map our data into another dimension (an abstract mathematical space) where its unique rules allowed us to separate amplitude and time, then map back to our dimension with results that seemed magical. All of a sudden we could make anechoic acoustical measurements in a reverberant environment. But the details of the mathematical transforms were way, way over my head. The best I could do was follow the thinking philosophically (that's what math eventually boils down to, doesn't it?). I've always wished that I could keep up with those guys who can think in that language.

I hope you're successful with your CUDA coding and can follow up here afterward just so we know that you were.

Kind regards, David
 