RTX 4090 Suprim: sudden black screen, fans at max RPM

Hello,

I recently bought the MSI Suprim X 4090 (air cooled). After using it for a week or two, I ran into some serious issues:
- My PC black screens (both monitors lose the display connection and go into standby), either just after startup on the desktop or 5-10 minutes into any game.
- The GPU fans go to max RPM.
- The rest of the PC remains functional: I can still use keyboard shortcuts and hear application sounds in the background.
- This persists until either an automatic restart after 5-10 minutes or a forced restart by holding the power button.
- Afterwards it boots cleanly, with no motherboard error LEDs or beeps, until the moment it black screens again.

While it does run, I luckily managed to do some benchmarking in 3DMark just fine and monitor the results:
- average temperature of 50 °C
- power draw between 50-400 W (depending on load)
- FPS and visuals all appear good; it does what a 4090 is supposed to do: be an absolute beast without too much power draw, it seems.

That is, up until the point where it black screens again. I'm trying to rule out any software- or settings-related issue before I send it in for repair, so I hope you guys can help me resolve this. Maybe I'm forgetting something, or maybe there's collective knowledge here, seeing as multiple issues are cropping up with the new 4090s.

A few more notes:
- I'm running a Ryzen 9 5950X, ROG STRIX X570-F GAMING, 2x FireCuda 2 TB M.2, 64 GB Ballistix RAM at 3600 MHz, and a ROG Thor 1200W Platinum, in an open-air case with plenty of cooling. (I'm in the middle of a Windows reset, so I don't have the exact info; I'll send it later if need be. It hasn't black screened yet during this setup.)
- I used to run an MSI GTX 1080 just fine with the above hardware, but on an older power supply.
- Running the latest BIOS, drivers, Windows updates, etc.
- Two monitors: a Dell office-grade monitor and a Samsung Odyssey Neo G7 32" at 4K with the supplied display cable.

If anyone is familiar with the issue or has a rough idea of the root of my problem, I'd be grateful to hear it.
 
Have you checked Event Viewer to see if there were any stop codes or something there?
 
Not sure how to read it, but I have no hardware events. I do have some administrative event IDs like 41 and 4502, but that was probably me shutting down and restarting; nothing else.
Realistically, I'd just be checking the Windows Logs -> System section to see what it says there at roughly the time the problem occurs. You might glean some more information on the problem from that.
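If clicking through the Event Viewer UI is tedious, something like this pulls the same entries from a script. A minimal sketch using Windows' built-in wevtutil; the one-hour window and 20-entry cap are just defaults I picked:

```python
# Dump recent Critical/Error entries from the System log, newest first.
# Event ID 41 (Kernel-Power) mostly just records the unclean shutdown
# itself; the interesting entries are usually nvlddmkm (display driver)
# or WHEA errors timestamped right before the black screen.
import subprocess

LOOKBACK_MS = 60 * 60 * 1000  # look back one hour (milliseconds)

query = (
    "*[System[(Level=1 or Level=2) and "  # Level 1=Critical, 2=Error
    f"TimeCreated[timediff(@SystemTime) <= {LOOKBACK_MS}]]]"
)
result = subprocess.run(
    ["wevtutil", "qe", "System", f"/q:{query}", "/f:text", "/c:20", "/rd:true"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```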

Outside of that, how are you providing power to the GPU?
What kind of connector is it? Can you maybe provide a picture of it?
 
Cheers for the reply, I'll have a more detailed look later. The GPU connector can be described as 16 pins with 4 smaller pins on top. The cable splits off into 4 standard 8-pin connectors that go into the PCIe connectors on the (modular) PSU. I've seen people report melting of said cable head, but mine appears to be in good condition so far.
 
Can you run MSI Center and check for a VBIOS update?
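If you want to confirm which VBIOS you're on before and after updating, nvidia-smi (it ships with the driver) can report it. A quick sketch:

```python
# Print the installed VBIOS version via nvidia-smi's query interface,
# to compare against whatever MSI Center offers.
import subprocess

result = subprocess.run(
    ["nvidia-smi", "--query-gpu=vbios_version", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())
```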
I was having a reset problem with my AMD GPU that turned out to be caused by a Logitech wireless dongle that had developed a fault. It took me a while to narrow that down, as the dongle was otherwise working normally but was causing the AMD GPU to reset.
Event Viewer would be the easiest place to find it, as events are listed in time-of-event order:
[Screenshot: Event Viewer event list]
 
After a Windows reset, as soon as I installed the Nvidia drivers the problem arose again.
Shortly after that I restarted again, but this time it showed all kinds of visual glitches and color flashing.
I've decided to send the 4090 in for a checkup to see if it is indeed a hardware malfunction. Thanks for all the help so far.
 
That does sound like a GPU issue honestly.

The thing with the connectors is that many of the 16-pin connectors going into the GPU have been found melting due to being badly designed or poorly made. I figured that since you seemed to be having issues with your GPU, that might have something to do with it, depending on who made the connector, etc.
 
Yeah, that sounds like the VRAM has a problem.
Strange that it did not show up sooner with artifacting on the screen.
 
Hmmm, interesting. I have the following build:

Case: Fractal Torrent RGB
CPU: Ryzen 9 7950X
GPU: MSI Suprim X RTX 3090
RAM: Corsair Vengeance RGB 4x 16 GB DDR5 6000 MHz
Storage: Western Digital SN850 1 TB
         Western Digital AN1500 4 TB
Cooler: Corsair H170
PSU: EVGA G+ 1600
Monitors: 2x Asus 1080p screens & 1x Samsung Odyssey G7 4K
OS: Windows 11

So I've been getting black screens on all 3 of my monitors any time I update my Nvidia driver to any version after 517.48, and then the system is unusable. The only thing that works is unplugging any one of the 3 monitors; any 2 monitors in any combination work fine on all of the drivers after 517.48.
The moment I plug a third monitor in, I get that standby message you described.
 
I think this issue is cable- or PSU-related. I have the same Suprim X (air) and it was working perfectly with the supplied 12VHPWR adapter, but after switching to a CableMod cable (12VHPWR to 4x 8-pin PSU plugs) I started randomly getting this issue. The issue went away after switching back to the supplied adapter and native PSU cables.
So... it's likely a poor connection at the PSU or card, a damaged cable, or a PSU that can't handle the transients.
 
That's interesting, because at the moment I'm convinced the Nvidia adapter was potentially causing this issue for me. Not only would it happen randomly, but I could trigger it by moving the adapter (Gigabyte 4090 Gaming OC).

The card was reseated over a dozen times, cables unplugged/replugged, etc. - there was no connection issue that I could control, everything was rock solid. Another thing that leads me to believe it was the adapter is that sometimes I'd boot and the power limit would be stuck at 100%. After shutting down and moving the cables, 133% would be unlocked again. All cables were clicked in and flush.
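A side note for anyone wanting to check this without opening an overclocking tool: nvidia-smi can report the card's current, default, and maximum power limits, so a boot where the card came up clamped is easy to spot. A small sketch:

```python
# Report the card's current, default, and maximum power limits. A boot
# where the card came up clamped shows power.limit stuck at the default
# even though power.max_limit is higher.
import subprocess

result = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=power.limit,power.default_limit,power.max_limit",
     "--format=csv"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```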

I'm wondering if there was some kind of faulty internal connection because it definitely was behaving like it. Regardless, I've been using the Moddiy cable for about 3 days now and haven't had any of these issues so far - I've been pushing the GPU hard trying to get it to trigger, but no luck. I'm hoping the problem is solved but only time will tell.

Also, I thought it might be the PSU as well, but one time it happened while I was just on the desktop, and if it were caused by a transient spike you'd expect the whole computer to shut down. I now use the logging feature in HWiNFO every time I boot; I might be able to glean something from it if it happens again.
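For anyone curious, this is roughly the check I plan to run over the HWiNFO CSV afterwards. The column header is a placeholder, since sensor names differ per board, and 11.4 V is just the ATX floor for the 12 V rail (nominal minus 5%):

```python
# Scan a HWiNFO CSV log for moments where the 12 V rail sagged below spec.
# RAIL_COLUMN is a hypothetical header -- open the CSV and match it to the
# sensor name your own board reports.
import csv

RAIL_COLUMN = "+12V [V]"   # placeholder column name; check your log's header
FLOOR_V = 11.4             # ATX allows 12 V +/- 5%, so ~11.4 V is the floor

with open("hwinfo_log.csv", newline="", encoding="latin-1") as f:
    for row in csv.DictReader(f):
        try:
            volts = float(row[RAIL_COLUMN])
        except (KeyError, ValueError):
            continue  # header mismatch or a non-numeric footer row
        if volts < FLOOR_V:
            print(row.get("Time", "?"), volts)
```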

*I was having another, separate issue where my main DisplayPort monitor would black out when Windows turned it off and required a system reboot to turn back on, while my HDMI monitor was unaffected. A few people on Reddit also had the problem, and disabling deep sleep on the monitor fixed it. I don't recall my 3080 ever having an issue turning the monitor back on like this.
 
I'm having this exact same issue. MSI Suprim 4090, latest Nvidia drivers and VBIOS from MSI Center. Running it on an Aorus X570 Master mobo with a Ryzen 5900X. The PSU is a Thermaltake Toughpower GF3 1200W, and I've got the Suprim powered via the native 12VHPWR cable that came with the PSU (i.e. 12VHPWR to 12VHPWR). I have 3 monitors connected to the 3 DP ports and my LG C1 OLED on the HDMI. I only ever use the 3 DP monitors for desktop or the 1 HDMI for couch gaming, never all 4 at once.
 
I have the exact same issue on my 4090 FE; it only started happening over the last few days. I'm running a Gigabyte Z690 board with a 12900K and 3 DisplayPort 4K @ 160 Hz monitors.
The game that consistently triggers the issue is Overwatch 2. Running on a 1600 W Corsair PSU with their 4090 cable.

I did the recent 4090 firmware update, as I had the issue where the BIOS screen stayed blank.
 
I seem to have solved it by removing the 12VHPWR cable and reconnecting it, just on the GPU side. I also removed a cable tie that was holding the 12VHPWR cable near the GPU, giving the cable more freedom to sit naturally.

I think this issue is cable-based. I wonder if, to combat the melting issues (which are seemingly due to poorly seated connections), they've introduced a fail-safe that trips when a suboptimal connection is detected.

Anyway, hopefully this helps some of you, though I guess it may not help everyone.
 
That's at least 4 people I've seen, including myself, connecting this to a potential cable problem. Over 2 weeks on and I still haven't replicated the issue with the new Moddiy cable.
 
I was able to get mine working by doing the same thing; however, my cable is the new 12VHPWR from CableMod. They are releasing some right-angle adapters, which I hope will eliminate this problem for good.
 
Hello,

I just had the exact same problem on my 4090. I was using the CableMod cable and it blacked out on me while playing Squad. The fans went to max RPM as well.

I unplugged the CableMod cable at the GPU end and plugged it back in. It seems to be working OK for now; I think I didn't quite have it secured enough.

My question is: did I cause damage to the GPU if that was the case? Someone said earlier that they think the GPU doing this is a fail-safe, but I just hope I didn't damage anything right after getting it.

Thanks
 
Apparently, if the GPU loses contact with the sense pins (the four tiny ones) once powered up, it will shut down.
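For illustration, here's roughly how that sideband scheme is commonly described: two of the four small pins are SENSE0/SENSE1, and the card reads them at power-up to decide how much power the cable is rated for. The mapping in this sketch follows the commonly reported PCIe CEM 5.0 table, so treat it as an assumption to check against the spec rather than gospel:

```python
# Illustration only (not vendor code): how a 12VHPWR card might interpret
# the SENSE0/SENSE1 sideband pins. The wattage mapping follows the commonly
# reported PCIe CEM 5.0 table -- verify against the actual spec.
SENSE_TO_WATTS = {
    ("gnd", "gnd"):   600,  # fully rated cable/adapter
    ("gnd", "open"):  450,
    ("open", "gnd"):  300,
    ("open", "open"): 150,  # also what a half-seated plug can look like
}

def allowed_power_w(sense0: str, sense1: str) -> int:
    """A dropped sense contact reads as a lower-rated (or absent) cable,
    so the card clamps its power limit or shuts down outright."""
    return SENSE_TO_WATTS.get((sense0, sense1), 0)

print(allowed_power_w("gnd", "gnd"))    # 600 W: normal operation
print(allowed_power_w("open", "open"))  # 150 W: loose-plug territory
```

That would also square with the earlier reports here of the power limit booting up clamped at 100% until the cable was reseated.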
 
I've experienced this since I got my MSI Gaming Trio RTX 4090, and this is exactly what fixed it.

Short version: I replaced the CableMod 4x 8-pin 12VHPWR cable with an Amazon FasGear PCIe 5.0 GPU 12VHPWR 3x 8-pin (6+2) cable.
Full version:
I recently purchased two 12VHPWR cables: the first a Basic E-Series 12VHPWR for EVGA G7/G6/G5/G3/G2/P2/T2, and the second an E-Series Pro ModFlex for the same PSU.
My PSU is an EVGA SuperNOVA 1300 G+ and my GPU is an MSI Gaming Trio RTX 4090.
I first tried the E-Series Pro and ran into an issue where, with the computer either idle or in a game, the monitor would go black and the GPU fans would spin to 100%. You could hear Windows still running in the background, but the only way to get the display back was to power cycle the PC. Sometimes it took a few hours to happen, but once it did, it repeated more and more consistently until I left the computer off for a couple of hours; that bought me a few hours of use until it happened again.

I spent weeks researching this issue online and tried every possible troubleshooting step (updating the motherboard BIOS, removing the GPU drivers with DDU and installing the latest ones, reverting to older drivers, removing any possibly conflicting hardware, re-seating the cables, and even removing and reinstalling the GPU), but nothing worked. I also tried the Basic E-Series cable: same issue. I then bought an Amazon FasGear PCIe 5.0 GPU 12VHPWR 3x 8-pin (6+2) cable, and I have not encountered the problem since.
 